HACKER Q&A
📣 embedding-shape

Should "I asked $AI, and it said" replies be forbidden in HN guidelines?


As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?


  👤 gortok Accepted Answer ✓
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’ll ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.


👤 masfuerte
Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.

👤 tpxl
I think they should be banned if there isn't a contribution beyond what the LLM answered. It's akin to 'I googled this', which is uninteresting.

👤 josefresco
As a community I think we should encourage "disclaimers", i.e. "I asked $AI, and it said...". The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.


👤 AdamH12113
To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.

I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.


👤 michaelcampbell
Related: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.

👤 ManlyBread
I think that the whole point of a discussion forum is to talk to other people, so I am in favor of banning AI replies. There's zero value in these posts: anyone can type chatgpt.com into the browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.

👤 chemotaxis
This wouldn't ban the behavior, just the disclosure of it.

👤 gruez
What do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?

👤 incanus77
Yes. This is the modern equivalent of “I searched the web and this is what it said”. If I could do the same thing and have the same results, you’re not adding any value.

Though this scenario is unlikely to happen, I’d equate this with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It’s just an aggregated and watered-down average of all the books.

I’d rather hear it filtered through a brain, be it a good answer or bad.


👤 newsoftheday
If someone is going to post like that, I feel they should post their prompt verbatim, the exact AI and version used, and the date they issued the prompt to receive the response they're posting.

There are far too many replies in this thread saying to drop the ban hammer for this to be taken seriously as Hacker News. What has happened to this audience?


👤 lproven
I endorse this. Please do take whatever measures are possible to discourage it, even if it won't stop people. It at least sends a message: this is not wanted, this is not helpful, this is not constructive.

👤 rsynnott
They should be forbidden _everywhere_. Absolutely obnoxious.

👤 a_wild_dandan
No. I like being able to ignore them. I can’t do that if people chop off their disclaimers to avoid comment removal.

👤 yomismoaqui
I think disclosing the use of AI is better than hiding it. The alternative is people using it but not telling anyone, for fear of a ban.

👤 jdoliner
I've always liked that HN typically has comments that are small bits of research relevant to the post that I could have done myself but don't have to because someone else did it for me. In a sense, the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority... and a bad one at that. And it makes the comment feel low effort. Oftentimes, comments that frame themselves this way are missing the "last-mile" effort that tailors the LLM's response to the context of the post.

So I think maybe the guidelines should say something like:

HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words that explain why it's relevant to the post and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.

---------

Also I asked ChatGPT and it said:

Short Answer

HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.

A rule probably isn’t needed. A norm is.


👤 sans_souse
There be a thing called Thee Undocumented Rules of HN, aka etiquette, in which states - and I quote: "Thou shall not post AI generated replies"

I can't locate them, but I'm sure they exist...


👤 Tiberium
I'm honestly grateful to those who disclose their use of AI in replies, because lately I've noticed more and more clearly LLM-generated comments on HN with no disclaimers whatsoever. And the worst part is that most people don't notice and still engage with them.

👤 zoomablemind
There's hardly a standard for a 'quality' contribution to discussion. Many styles, many opinions, many ways to react and support one's statements.

If anything, it has been quite customary to supply references for important facts, thus letting readers explore further and interpret the facts.

With AI in the mix, references become even more important, in view of hallucinations and fact poisoning.

Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.


👤 mindcandy
Is the content of the comment productive to the conversation? Upvote it.

Is the content of the comment counter-productive? Downvote it.

I could see cases where large walls of text that are generally useless should be downvoted or even removed. AI or not. But, the first example

> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?

to be frank, is a service to all HN readers. Yes it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect


👤 stego-tech
Formalizing it within the community rules removes ambiguity around intent or use, so yes, I do believe we should be barring AI-generated comments and stories from HN in general. At the very least, it adds another barometer of sorts to help community leaders do the hard work of managing this environment.

If you didn’t think it, and you didn’t write it, it doesn’t belong here.


👤 shishy
People are probably copy pasting already without that disclosure :(

👤 AlwaysRock
Yes, unless the commenter actually adds something useful, or the post itself is about "I asked LLM X and it said Y (which was unexpected)".

I have a coworker who does this somewhat often, and I always just feel like saying: that's great, but what do you think? What is your opinion?

At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.


👤 ilc
No, I put them in the same bucket as lmgtfy. Most of the time, you are being told that your question is easy to research and you didn't do the work.

Also, heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has its uses, especially in easy cases.


👤 PeterStuer
For better or worse, that ship has sailed. LLMs are now as omnipresent as web search.

Some people will know how to use it in good taste, others will try to abuse it in bad taste.

It might not be universally agreed which is which in every case.


👤 gAI
I asked AI, and it said yes.

👤 AnonC
Are you a new HN mod (with authority over the guidelines) and are asking for opinions from readers (that’d be new)? Or are you just another normal user and are loudly wondering about this so that mods get inputs (as opposed to writing a nice email to hn@ycombinator.com)?

I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need for any gatekeeping by the guidelines on this matter. That's my opinion.


👤 LeoPanthera
Banning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.

👤 jpease
I asked AI if “I asked AI, and it said” replies should be forbidden, and it said…

👤 JohnFen
I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.

But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.


👤 nlawalker
No, just upvote or downvote. I think the site guidelines could take a stance on it though, encouraging people to post human insights and discouraging comments that are effectively LLM output (regardless of whether they actually are).

👤 ekjhgkejhgk
I don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to spot them.

👤 MBCook
Yes, please. It’s extremely low effort. If you’re not adding anything of value (typing into another window and copying and pasting the output are not) then it serves no purpose.

It’s the same as “this” or “wut” but much longer.

If you’re posting that and ANALYZING the output that’s different. That could be useful. You added something there.


👤 satisfice
Only if they also do a google search, provide the top one hundred hits, and paste in a relevant Wikipedia page.

👤 Mistletoe
I don’t see how it is much different from using Wikipedia. They usually give about the same answer, and at least Gemini usually gives a correct answer now.

👤 testdelacc1
Maybe I remember the Grok ones more clearly but it felt like “I asked Grok” was more prevalent than the others.

I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.

Of course, if it’s banned maybe people just stop admitting it.


👤 Zak
I don't think people should post the unfiltered output of an LLM as if it has value. If a question in a comment has a single correct answer that is so easily discoverable, I might downvote the comment instead.

I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.


👤 bryanlarsen
What is annoying about them is that they tend to be long with a low signal/noise ratio. I'd be fine with a comment saying, "I think the ChatGPT answer is informative: [link]". It'd still likely get downvoted to the bottom of the discussion, where it likely belongs.

👤 whimsicalism
I think comments like this should link their generation rather than C+P it. Not sure if this should be a rule or we can just let downvoting do the work - I worry that a rule would be overapplied and I think there are contexts that are okay.

👤 breckinloggins
If it’s part of an otherwise coherent post making a larger point I have no issue with it.

If it’s a low effort copy pasta post I think downvotes are sufficient unless it starts to obliterate the signal vs noise ratio on the site.


👤 prpl
were lmgtfy links ever forbidden?

👤 skobes
I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.

👤 exasperaited
No, don't ban it. It's a useful signal for value judgements.

👤 63stack
Yes

👤 mistrial9
The system of long-lived nicks on YNews is intended to build a mild and flexible reputation system. This is valuable for complex topics, and for noticing zealots, among other things. The feeling while reading that it is a community of peers is important.

AI-LLM replies break all of these things. AI-LLM replies must be declared as such, certainly, IMHO. It seems desirable to have off-page links for (inevitably) lengthy reply content.

This is an existential change for online communication. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.


👤 pembrook
No, this is not a good rule.

What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added as context within a conversation.


👤 dominotw
i asked chatgpt and it said no its not a good idea to ban

👤 ben_w
Depends on the context.

I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.

Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here if an AI-generated TLDR of 74 (75?) page PDF is correct, is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360


👤 leephillips
Posting this kind of slop should be a banning offense. Also: https://hn-ai.org/

👤 syockit
You can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLMesque structure of most of those bot answers, how could you tell if one has crafted the answer so much that it looks like a genuine human answer?

Obligatory xkcd https://xkcd.com/810/


👤 ruined
you got a downvote button

👤 tehwebguy
It should be allowed and downvoted

👤 moomoo11
Honestly, I judge people pretty harshly. I ask people a question in honest good faith. If they're trying to help me out and genuinely care, and they use AI, fine.

But most of the time it’s like they were bothered that I asked and copy paste what an AI said.

Pretty easy. Just add their name to my “GFY” list and move on in my life.


👤 WesolyKubeczek
Yes. If I wanted an LLM’s opinion, I would have asked it myself.

👤 0x00cl
This is what DeepSeek said:

> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.


👤 Rebelgecko
This is not just about banning a source; it is about preserving the core principle of substantive, human-vetted content on HN. Allowing comments that are merely regurgitations of an LLM's generic output—often lacking context, specific experience, or genuine critical thought—treats the community as an outsourced validation layer for machine learning, rather than an ecosystem for expert discussion. It's like allowing a vending machine to contribute to a Michelin-starred chef's tasting menu: the ingredients might be technically edible, but they completely bypass the human skill, critical judgment, and passion that defines the experience. Such low-effort contributions fundamentally violate the "no shallow dismissals" guideline by prioritizing easily manufactured volume over unique human insight, inevitably degrading the platform's high signal-to-noise ratio and displacing valuable commentary from those who have actually put in the work.

👤 tekacs
Based on other HN rules thus far, I tend to think that this just results in more comments pointing out that you're violating a rule.

In many threads, those comments can be just as annoying and distracting as the ones being replied to.

I say this as someone who to my recollection has never had anyone reply with a rule correction to me -- but I've seen so many of them over the years and I feel like we would fill up the screen even more with a rule like this.


👤 ycosynot
As a brain is made of small pebbles, an LLM is made of small pebbles. If it wants to talk, let it be. I am arguing metaphysically. Not only did it evolve partially out of randomness (and so with a kind of value as an enlightened POV on existence), but it is still evolving to be human, and even more than human. I believe LLMs should not be banned; "they" should be willfully, and cheerfully, included in the discourse.

I asked Perplexity, and Perplexity said: "Your metaphysical intuition is very much in line with live debates: once 'small pebbles' are arranged into agents that talk, coordinate, and co-shape our world, there is a strong philosophical case that they should be brought inside our moral and political conversations rather than excluded by fiat."


👤 TomasBM
Yes.

The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.

Everyone should be free to read, interpret and formulate their comments however they'd like.

But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.

And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.


👤 TulliusCicero
I'd like it to be forbidden, yes.

Sure, I'll occasionally ask an LLM about something if the info is easy to verify after, but I wouldn't like comments here that were just copy-pastes of the Google search results page either.


👤 FromOmelas
Rather than a ban, I would prefer that posts/comments be labeled as such.

with features:

- ability to hide AI labeled replies (by default)

- assign lower weight when appropriate

- if a user is suspected to be AI-generated, retroactively label all their replies as "suspected AI"

- in addition to downvote/upvote, a "I think this is AI" counter
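A minimal sketch of what the down-weighting idea in that list might look like. Everything here is invented for illustration: HN exposes no such fields, and the 0.5 factors are arbitrary.

```javascript
// Hypothetical ranking tweak: replies labeled as AI (or flagged via an
// "I think this is AI" counter) get their vote score scaled down.
// Field names and weights are made up purely to illustrate the idea.
function effectiveScore(reply) {
  let weight = 1.0;
  if (reply.labeledAI) weight *= 0.5;   // author disclosed AI use
  if (reply.suspectedAI) weight *= 0.5; // community flagged it as AI
  return reply.votes * weight;
}

// Sort replies so down-weighted AI content sinks in the thread.
function rankReplies(replies) {
  return [...replies].sort((a, b) => effectiveScore(b) - effectiveScore(a));
}
```

Under this scheme, a disclosed-AI reply with 10 votes (effective score 5) would rank below a human reply with 6 votes.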


👤 m-hodges
I feel like this won't eliminate AI-generated replies, it'll just eliminate disclosing that the replies are AI-generated.

👤 etchalon
Yes.

👤 RiverCrochet
Yes. LLM copy/paste strongly indicates karma/vote farming, because if I wanted an LLM's output I could just go there myself.

Someone below mentions using it for translation and I think that's OK.

Idea: Prevent LLM copy/pasting by preempting it. Google and other things display LLM summaries of what you search for after you enter your search query, and that's frequently annoying.

So imagine the same on an HN post. In a clearly delineated and collapsible box underneath or beside the post. It is also annoying, but it also removes the incentive to run the question through an LLM and post the output, because it was already done.


👤 qustrolabe
HN has a very primitive comment layout that gives too much prominence to large responses and to the most-upvoted comment with all its replies. Just because of that, I think it's better to do something about large responses with little value. I'd rather they just share a conversation link.

👤 tptacek
They already are against the rules here.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

(This is a broader restriction than the one you're looking for).

It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.


👤 phoe-krk
Yes. I'd prefer comments that have intent, not just high statistical probability.

👤 theLegionWithin
How are you going to enforce it? If someone does that and reformats the text a bit, it'll look like a unique response.

👤 sodapopcan
That and replies that start with "No"

👤 myst
I remember times when this sentiment was being expressed about “According to Wikipedia...” As much as I am pro implementing this rule, I’m afraid we are losing this fight.

👤 legohead
Lots of old man yelling at clouds energy in here.

This is new territory, you don't ban it, you adapt with it.


👤 jeffbee
If it was my personal site I would instantly ban all such accounts. They are basically virus-carrying individuals from outer space, here to destroy the discourse.

Since that isn't likely to happen, perhaps the community can develop a browser extension that calls attention to or suppresses such accounts.


👤 bbor

   large LLM-generated texts just get in the way of reading real text from real humans
In terms of reasons for platform-level censorship, "I have to scroll sometimes" seems like a bad one.

👤 createaccount99
Forbidden? They should be mandatory.

👤 cm2012
These comments are in the top 10% of usefulness of all comments in those threads. Clear, legible information that is easy to read and relevant. Keep!

👤 flkiwi
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in real time. This is analogous to "Should we ban people from wearing Bluetooth headsets in the coffee shop?" in the '00s: people are demonstrating a new behavior that is disrupting social norms, but the actual violation is really that the person looks like a dork. To that end, I'd sooner support public shaming, potentially a clear "we aren't banning it, but please don't be an AI goober and don't just regurgitate AI output", than an outright ban.


👤 BrtByte
Maybe a good middle ground would be: if you're referencing something an LLM said, make it part of your thinking...

👤 stack_framer
I'm here to learn what other people think, so I'm in favor of not seeing AI comments here.

That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"

I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.


👤 bawolff
Yes, they are almost always low value comments.

👤 neom
Not addressing your question directly, but when I got flagged last year I emailed Dan, and this was the exchange:

> John Edgar, Sat, Jul 15, 2023, 8:08 AM, to Hacker:
>
> https://news.ycombinator.com/item?id=36735275
>
> Just curious if ChatGPT is actually formally banned on HN?

> Hacker News, Sat, Jul 15, 2023, 4:12 PM, to me:
>
> Yes, they're banned. I don't know about "formally" because that word can mean different things and a lot of the practice of HN is informal. But we've definitely never allowed bots or generated comments. Here are some old posts referring to that.
>
> dang
>
> https://news.ycombinator.com/item?id=35984470 (May 2023)
> https://news.ycombinator.com/item?id=35869698 (May 2023)
> https://news.ycombinator.com/item?id=35210503 (March 2023)
> https://news.ycombinator.com/item?id=35206303 (March 2023)
> https://news.ycombinator.com/item?id=33950747 (Dec 2022)
> https://news.ycombinator.com/item?id=33911426 (Dec 2022)
> https://news.ycombinator.com/item?id=32571890 (Aug 2022)
> https://news.ycombinator.com/item?id=27558392 (June 2021)
> https://news.ycombinator.com/item?id=26693590 (April 2021)
> https://news.ycombinator.com/item?id=22744611 (April 2020)
> https://news.ycombinator.com/item?id=22427782 (Feb 2020)
> https://news.ycombinator.com/item?id=21774797 (Dec 2019)
> https://news.ycombinator.com/item?id=19325914 (March 2019)


👤 amatecha
Yes. If I wanted an LLM-generated response I'd submit my own query to such a service. I never want to see LLM-generated content on HN.

👤 Jimmc414
Banning, no; proper citations and disclosure, yes. Sometimes an AI response is noteworthy and is the point of the post.

👤 kylehotchkiss
It's karma fishing, so yes, please ban it. While we're at it, just automatically add the archive.is link to any news article or don't allow voting on those comments ¯\_(ツ)_/¯

👤 novok
IMO you shouldn't paste in a large amount of quoted text; that's just annoying. You should link out at that point. I think if we ban people from citing sources, they will just stop citing sources, and that is even worse. It's the new "I googled that for you", and that is fine IMO.

👤 sebastiennight
Most comments I've seen are comparing this behavior to "I googled it and..." but I think this misses the point.

Someone once put it as, "sharing your LLM conversations with others is as interesting to them as narrating the details of your dreams", which I find eerily accurate.

We are here in this human space in the pursuit of learning, edification, debate, and (hopefully) truth.

There is a qualitative difference between the unreliability of pseudonymous humans here vs the unreliability of LLM output.

And it is the same qualitative difference that makes it interesting to have some random poster share their (potentially incorrect) factual understanding, and uninteresting if the same person said "look, I have no idea, but in a dream last night it seemed to me that..."


👤 snayan
I would say it depends, from your examples:

1) Borderline. Potentially provides some benefit to the thread for readers who also don't have the time or expertise to read an 83-page paper, although it would require someone to acknowledge and agree that the summary is sound.

2) Acceptable. Dude got grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.

3) Borderline. Same as 1, mostly.

The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about, and giving an opinion that is actually just the output of an LLM, I'd agree. But all the examples you provided are transformative in some way. Either summarizing and simplifying a long article or paper, or creating art.


👤 827a
This is a way of attributing where the comment is coming from, which is better than responding with what the AI says and not attributing it. I would support a guideline that discourages posting the output from AI systems, but ultimately there's no way to stop it.

👤 popalchemist
Absolutely. Any of us could ask AI if we wanted to hear random unsubstantiated opinions. Why should that get in the way of what we all come here for, which is communication with humans?

👤 alwa
I tend to trust the voting system to separate the wheat from the chaff. If I were to try and draw a line, though, I’d start at the foundation: leave room for things that add value, avoid contributions that don’t. I’d suggest that line might be somewhere like “please don’t quote LLMs directly unless you can identify the specific value you’re adding above and beyond.” Or “…unless you’re adding original context or using them in a way that’s somehow non-obvious.”

Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”

Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”

And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”

There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.

[0] https://americanart.si.edu/blog/andrew-clemens-sand-art


👤 amelius
Can't we have a unicode escape sequence for anything generated by AI?

Then we can just filter it at the browser level.
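A rough sketch of how that browser-level filter could work, assuming (hypothetically; no such Unicode convention exists) that AI-generated text carried an invisible marker codepoint such as U+E0001:

```javascript
// Hypothetical marker codepoint for AI-generated text (invented for this
// sketch; Unicode defines no such convention).
const AI_MARKER = "\u{E0001}";

// True if a piece of text carries the marker anywhere.
function isMarkedAsAI(text) {
  return text.includes(AI_MARKER);
}

// Keep only comments without the marker. A browser extension could apply
// the same check per DOM node, e.g.:
//   document.querySelectorAll(".comment").forEach((el) => {
//     if (isMarkedAsAI(el.textContent)) el.style.display = "none";
//   });
function filterHuman(comments) {
  return comments.filter((c) => !isMarkedAsAI(c));
}
```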


👤 steveBK123
Ban it. It is the "let me google that for you" of the 2020s

👤 monknomo
I do not think so. If I wanted an ai's opinion, I'd ask the ai.

Should we allow 'let me google that for you' responses?


👤 markus_zhang
It’s fine as long as people take the effort to double-check the answers.

👤 Aloha
I also endorse this. Maybe not an outright ban, but at least strong discouragement.

👤 mattnewton
1 and 3, straight to jail. 2 is fine

👤 trenchgun
Permaban from first strike

👤 Akronymus
I'd love to say yes, but it's basically unenforceable if the comment doesn't disclose it itself.

👤 bakugo
While I do think such comments are pointless and almost never add anything to the discussion, I don't believe they're anywhere near as actively harmful as comments and (especially) submissions that are largely or entirely AI generated with no disclosure.

I've been seeing more and more of these on the front page lately.


👤 submeta
While I agree that we should be genuinely engaging with each other on this platform, trying to disallow all AI-generated content reminds me of the naysayers when it comes to letting LLMs write code.

Yes, if you wanted to ask an LLM, you’d do so. But someone else asks the LLM a specific question and generates an answer specific to that question, and that might add value to the discussion.


👤 djoldman
The current upvote/downvote mechanism seems more than adequate to address these concerns.

If someone thinks an "I asked $AI, and it said" comment is bad, then they can downvote it.

As an aside, at times it may be insightful or curious to see what an AI actually says...


👤 WithinReason
That's what the upvote/downvote system is for.

👤 srcreigh
The guidelines are just fine as they are.

Low effort LLM crap is bad.

Flame bait uncurious mob pile-ons (this thread) are also bad.

Use the downvote button.


👤 ok123456
yes

👤 mrguyorama
Umm, just to be clear;

HN is not actually a democracy. The rules are not voted on. They are set by the people who own and run HN.

Please tell me what you think those people think of this question.


👤 hooverd
Yes please. I don't care if somebody did their own research via one, but it's just so low effort.

👤 alienbaby
This will be fine, until you can't tell the difference and they forgo the 'I asked' part.

👤 kreck
Yes.

Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.


👤 TRiG_Ireland
As Tom Scott has said, people telling you what AI told them is worse than people describing their dreams. It definitely does not usefully contribute to the conversation.

Small exception if the user is actually talking about AI, and quoting some AI output to illustrate their point, in which case the AI output should be a very small section of the post as a whole.


👤 shaftoe444
Yes

👤 Projectiboga
With the exception of careful language translation, I would say yes. Otherwise, follow the breadcrumbs and click through to the source and go from there, as far as search-engine-derived AI snippets go.

👤 MetaWhirledPeas
> Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

This should be restated: Should people stop admitting to AI usage out of shame, and start pretending to be actual experts or doing research on their own when they really aren't?

Be careful what you wish for.


👤 whitehexagon
>Personally, I'm on HN for the human conversation

Agreed. It's hard enough dealing with the endless stream of LLM marketing stories; please let's at least try to keep the comments a little free of this 'I asked...' marketing spam.


👤 stevenalowe
I think the AIs should post directly

👤 hermannj314
HN does not and never has valued human input as such; it has always valued substantive, clever, or interesting thought.

I am a human and more than half of what I write here is rejected.

I say bring on the AI. We are full of gatekeeping assholes, but we definitely have never cared if you have a heart (literally and figuratively).


👤 HPsquared
It's nice that they warn others, though. Better to let them label it as such rather than banning the label. I'd rather it be simply frowned upon.

👤 zby
What is banned here? I can only find guidelines: https://news.ycombinator.com/newsguidelines.html not rules.

👤 wseqyrku
File this under unsolvable problems in computer science.

This won't go away. The top comment in half of posts is already about whether the article is AI or not; true or not, it sucks the oxygen out of the discussion, and the actual content and discourse is lost. There's also no way to avoid that, because having that information seems necessary if the article is in fact AI-generated.

Every time I so much as point this out, I get downvoted to oblivion, but it is what it is, I guess.


👤 mx7zysuj4xew
Yes, unequivocally yes.

👤 Havoc
I'd say it's annoying & low value but doesn't quite warrant a ban per se.

Plus, if you ban it, people will just remove the "AI said" part and post it as-is without reading, and now you're engaging with an AI without even the courtesy of knowing. That seems even worse.


👤 fortran77
Sometimes AI gives such a surprising or unusual answer to a question that it's worth a discussion. I think it should be discouraged but not "forbidden".

👤 lonelyasacloud
TL;DR: Until we are sure we have the moderation systems to assist in surfacing the good stuff, I would be in favour of temporary guidelines to maintain quality.

Longer ...

I am here for the interesting conversations and polite debate.

In principle I have no issue with citing AI responses in much the same way we do any other source, or with individuals prompting AIs to generate interesting responses on their behalf. When done well, I believe it can improve discourse.

Practically, though, we know that the volume of content AIs can generate tends to overwhelm human-based moderation and review systems. I like the signal-to-noise ratio as it is, so from my POV I'd favour a cautious approach, with a temporary guideline against its usage, until we are sure we have the moderation tools to preserve that quality.


👤 bloppe
I don't think this needs to be banned, particularly because it wouldn't be very effective (people would just get rid of the "AI said" part), and also because anybody who actually writes a comment like that would probably get downvoted out of the conversation anyway.

Why introduce an unnecessary and ineffective regulation?


👤 maerF0x0
I see it as equivalently helpful to the folks who paste archive.is/ph links for paywalled content. It saves me time on something I may have wanted to do anyway, and it's easy enough to fold if someone does post a wall of response.

IMO hiding such content is the job of an extension.

When I do "here's what ChatGPT has to say," it's usually because I'm pretty confident of a thing but have no idea what the original source was, and I'm not going to invest much time in resurrecting the trail back to where I first learned it. I'm not going to spend 60 minutes properly sourcing an HN comment; it's just not the level of discussion I'm willing to have, though many in the community seem to require an academic level of investment.


👤 ahmadtbk
AI writing should at least be rewritten or polished, as a form of respect for others.

👤 Kim_Bruning
But what [if the LLMs generate] constructive and helpful comments? https://xkcd.com/810/

For obvious(?) reasons I won't point to some recent comments that I suspect, but they were kind and gentle in the way that Opus 4.5 can be at times; encouraging humans to be good with each other.

I think the rules should be similar to bot rules I saw on wikipedia. It ought to be ok to USE an AI in the process of making a comment, but the comment needs to be 'owned' by the human/the account posting it.

E.g., if it's a helpful comment, it should be upvoted. If it's not helpful, downvoted; and with a little luck people will be encouraged/discouraged from using AI in inappropriate ways.

"I asked gemini, and gemini said..." is probably the wrong format, if it's otherwise (un)useful, just vote it accordingly?


👤 freejazz
Yes.

👤 TZubiri
In my experience managing teams, you want to encourage, not forbid, this, because the alternative is people using LLMs without telling you, which is 100 times worse than disclosed LLM use.

👤 ahmadtbk
AI slop is very exhausting to understand. If it's well written, maybe not. If it's obviously AI, then it should be flagged.

👤 buellerbueller
I agree and think the solution is to get rid of the LLMs.

👤 cdelsolar
nah

👤 cwmoore
Yes. Embarrassing cringe, whether or not it is noted.

But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are no longer attributable to real-life experience or effort. For the moment I have accepted the additional overhead.

As with most, I have a habit of estimating the validity of expertise in comments, and experiential biases, but that is becoming untenable.

Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so their actual value, informational complexity, humor, and salience, may be compared?

Though many obviously human commenters are actually inferior to answers from “let me chatgpt that for you.”

I have had healthy suspicions for a while now.


👤 suckler
Asking a chat bot a question and adding its answer to a public conversation defeats the purpose of the conversation. It's like telling someone to Google their question when your personal answer could have potentially been a lot more helpful than a Google search. If I wanted Grok I'd ask Grok, not the human I chose to speak to instead.

👤 AnthonyMouse
We should probably distinguish between posting AI responses in a discussion of AI vs. posting them in a discussion of something else.

If the discussion itself is about AI then what it produces is obviously relevant. If it's about something else, nobody needs you to copy and paste for them.


👤 jasomill
Short answer: Probably not outright forbidden — but discouraged or constrained — because “I asked AI…” posts usually add noise, not insight.

(source: ChatGPT)


👤 razingeden
I only get upset about it when the AI didn’t read the article either.

👤 uhfraid
IMO, I don’t think they add any value to HN discussions

It’s the HN equivalent to “@grok is this true?”, but worse


👤 iambateman
We should prefer shaming and humiliation over forbiddance––norms beat laws in such situations.

Of course I prefer to read the thoughts of an actual human on here, but I don't think it makes sense to update the guidelines. Eventually the guidelines would get so long and tedious that no one would pay attention to them and they'd stop working altogether.

(did I include the non-word forbiddance to emphasize the point that a human––not a robot––wrote this comment? Yes, yes I did.)


👤 GaryBluto
It's a tad rich to be on HN for such a small amount of time and already be trying to sway the rules to what you wish them to be.

👤 HeavyStorm
I think answers should be judged by content, not by the tool used to construct the answer.

Also, if you forbid people from telling you they consulted AI, they will just not say it.


👤 mvdtnz
AI generated content should be absolutely banned without question. This includes comments and submissions.

👤 Gud
Absolutely.

I am blown away by LLMs - I'm now using ChatGPT to help me write Python scripts in seconds or minutes that used to take me hours or weeks.

Yet, when I ask a question, or wish to discuss something on here, I do it because I want input from another meatbag in the hacker news collective.

I don’t want some corporate BS.


👤 seizethecheese
No, because this is self correcting behavior. If the comments are annoying, people will downvote them. In the rare case this is appropriate, they should be allowed. Guidelines are for things that will naturally be upvoted but shouldn’t be.

👤 bjourne
Yes, please. LLMs are poisoning all online conversations everywhere. It's an epidemic, a global plague.

👤 ThrowawayR2
That's been discussed previously in https://news.ycombinator.com/item?id=33945628 and dang said in the topmost comment: "They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either! ...". There's more to his comment if you're interested.

The HN guidelines haven't yet been updated but perhaps if enough people send an email to the moderators, they'll do it.


👤 unsignedint
I think the real litmus test should be whether the comment adds anything substantive to the conversation. If someone is outsourcing their ideas to AI, that’s a different situation from simply using AI to rephrase or tidy up their own thoughts—so long as they fully understand what they’re posting and stand behind it.

Saying "I asked AI" usually falls into the former category, unless the discussion is specifically about analyzing AI-generated responses.

People already post plenty of non-substantive comments regardless of whether AI is involved, so the focus should be on whether the remark contributes any meaningful value to the discourse, not on the tools used to prepare it.


👤 tveyben
If one wants an opinion from AI, one must ask AI; if, on the other hand, one wants an opinion from a human being (those with a real brain thinking real thoughts, etc.), then hopefully that's what you get (and will keep getting) when visiting HN.

Please don’t pollute responses with made-up machine generated time-wasting bits here…!!!


👤 HackeNewsFan234
I like the honesty aspect of it so that I can choose to (possibly) ignore the response. If they were forbidden and people posted the same $AI response without the disclaimer, I'd be more easily deceived.

👤 ynx0
You can’t stop people from using AI, but at least people are being transparent.

Doing this will lead to people using AI without mentioning it, making it even harder to identify human-origin content.


👤 lkt
No because it allows you to set the bozo bit on them and completely disregard anything they say in the future

👤 XorNot
Yes. Unambiguously. I want this exact behavior to lead to social ostracism everywhere.

Edit: I'm happy to add two related categories to that too - telling someone to "ask ChatGPT" or "Google it" is a similar level of offense.


👤 stocksinsmocks
I already assume everything, and I mean everything, that I read in any comment section, whether here or sewers like Reddit, X, or one of the many Twitter-but-Communist clones, is either:

1. Paid marketing (tech stacks, political hackery, Rust evangelism)

2. Some sociopath talking his own book

3. Someone who spouts off about things he doesn’t know about (see: this post’s author)

The internet of real people died decades ago and we can only wander in the polished megalithic ruins of that enlightened age.


👤 jopsen
The "I asked $LLM about $X, and here is what $LLM said" pattern is probably most used to:

(A) Ridicule the AI for giving a dumb answer.

(B) Point out how obvious something is.


👤 TheAceOfHearts
I think you shouldn't launder LLM output as your own, but in AI model discussion and new release threads it can be useful to highlight examples of outputs from LLMs. The framing and usage is a key element: I'm interested in what kinds of things people are trying. Using LLM output as a substitute for engagement isn't interesting, but combining a bunch of responses to highlight differences between models could be interesting.

I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling I'll note how I found the link through an LLM and explain how it relates to the discussion.


👤 clearleaf
I don't see the point of publishing any AI-generated content. If I want an AI's opinion on something, I can ask it. If I want an AI image, I can generate it. I've never found it helpful to have someone else's AI output lying around.

👤 BiraIgnacio
HN (and the community here) has a great system for surfacing the most useful information and, therefore, pushing the not-so-good stuff away.

So no, I don't think forbidding anything helps. Let things fall where they should, otherwise.


👤 WhyOhWhyQ
I always state when I use AI because I view it to be deceptive otherwise. Since sometimes I'll be using AI when it seems appropriate, and certainly only in direct limited ways, this rule seems like it would force me to be dishonest.

For instance, what's wrong with the following: "Here's an interesting point about foo topic. Here's another interesting point about bar topic; I learned of this through Gemini. Here's another interesting point about baz topic."

Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.


👤 insane_dreamer
I think AI can be useful to cite in comments as a source of information. I.e., where you might otherwise say "According to Bloomberg, CPI is up 5% in the past 6 months[0]" with [0] linking to a page where you got that info, you could have "According to Claude/GPT/Gemini, CPI is up 5% in the past 6 months" ideally with [0] being the prompt used.

👤 Razengan
Fighting the zeitgeist never works out. The world's gonna move on whether you move on with it or not.

I for one would love to have summary executions for anyone who says that Hello-Fellow-Kids cringe pushed on us by middle-aged squares: "vibe"