Ke Recommends | AI Is a Crock

AI answers questions, but it doesn’t ask them. Guest post by Robert Gore at Straight Line Logic

Never has humanity expended so much on an endeavor for which it will receive so little as the Artificial Intelligence (AI) project. Its design rests on the assumption that the human intelligence (HI) it is attempting to mimic and surpass is analogous to its own operating protocols. In other words, humans take in data and process it in definable ways that lead to understandable outputs, and that is the essence of HI.

AI designers reverse the scientific process of exploring reality and then defining, modeling, and perhaps deriving something useful from it, instead assuming that the reality of HI conforms to the AI model they’re building. It’s like expecting a clock to reveal the nature of time. This may seem surprising because among AI designers are some of the brightest people in the world. However, they demonstrate a profound lack of those qualities that might lead them to further understanding of HI: self-awareness, introspection, humility, wisdom, and appreciation of the fact that much of HI remains quite mysterious and may always remain so. Alas, some of them are just plain evil.

AI looks backward. It’s fed and assimilates vast amounts of existing data and slices and dices it in myriad ways. Large language models (LLMs) can respond to human queries and produce answers based on assimilated and manipulated data. AI can be incorporated into processes and systems in which procedures and outcomes are dependent on data and logically defined protocols for evaluating it. Within those parameters, it has demonstrated abilities to solve problems (playing complex games, medical diagnosis, professional qualification exams, improving existing processes) that surpass HI. There is, of course, value in such uses of LLMs and AI, but that value derives from making some of the more mundane aspects of HI—data assimilation, manipulation, and optimization for use—better. Does that value justify the trillions of dollars and megawatts being devoted to AI? Undoubtedly not.

What AI can’t and won’t touch are the most interesting, important, and forward-facing aspects of HI, because no one has yet figured out how those aspects actually work. They are captured by the question: How does the human mind and soul generate the new? How do curiosity, theorization, imagination, creativity, inspiration, experimentation, improvisation, development, revision, and persistence come together to produce innovation? It’s ludicrous to suggest that we have even a rudimentary understanding of where the new comes from. Ask innovators and creators how they generated a new idea and you’re liable to get answers such as: an inspiration awakened them at three in the morning, or it came to them while they were sitting on the toilet. Model that! At root, the problem is that although AI can answer a seemingly infinite number of questions, it can’t ask a single one. It can be programmed to spot and attempt to resolve conflicts within data, but it doesn’t autonomously ask questions. From birth, the human mind is an autonomous question generator; it’s how we learn. That’s not confined to our species. Anyone who’s ever watched puppies or kittens can see that they have something akin to human curiosity. They explore their environments and are interested in anything new (if they’re not afraid of it). Curiosity and questions are the foundation of learning and intelligence. Reading even a page of something interesting or provocative will generate questions. Generative AI “reads” trillions of pages without an iota of curiosity. No one who either hails or warns of AI surpassing HI has explained how it will do so while bypassing the foundation of HI.

Generative AI is supposedly going to generate something new by unquestioningly manipulating existing data. Even within that ambit AI is encountering perhaps insoluble problems. Model collapse refers to the degradation of AI models that are trained on AI generated output. Here’s an illustration:

[Figure: model collapse illustration]

“Model Collapse: The Entire Bubble Economy Is a Hallucination,” Charles Hugh Smith, December 3, 2025
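
To make the dynamic concrete, here is a minimal toy sketch in Python (not the mechanism of any real LLM): a "model" that simply fits a Gaussian to its training data is retrained, generation after generation, on its own sampled output. Under these assumptions the estimated spread drifts and contracts, so rare tail behavior disappears, a simplified analogue of model collapse.

```python
import random
import statistics

def train(samples):
    # "Training" here is just fitting a mean and standard deviation.
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n):
    # "Generation" is sampling from the fitted model.
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # generation 0: real data

for gen in range(15):
    mu, sigma = train(data)
    print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Each new generation trains only on the previous model's output.
    data = generate(mu, sigma, 200)
```

In real systems the degradation shows up as lost diversity and confidently wrong output rather than a shrinking standard deviation, but the feedback loop is the same: one generation's output becomes the next generation's training set.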

HI generally gets better at something the more often it tries. AI degradation causes generative AI to generate hallucinations—nonsense. Which means one or more humans have to oversee AI to prevent such hallucinations. How many mini, non-obvious hallucinations fall through the cracks? No one knows.

AI has been presented as a labor-saving miracle. But many businesses report a different experience: “work slop” — AI-generated content that looks polished but must be painstakingly corrected by humans. Time is not saved — it is quietly relocated.

Studies point to the same paradox:

• According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.

• MIT Sloan research indicates that AI adoption can lead to initial productivity losses — and that any potential gains depend on major organizational and human adaptation.

• Even McKinsey — one of AI’s greatest evangelists — warns that AI only produces value after major human and organizational change. “Piloting gen AI is easy, but creating value is hard.”

This suggests that AI has not yet removed human labor. It has hidden it — behind algorithms, interfaces, and automated output that still requires correction.

“AI, GDP, and the Public Risk Few Are Talking About,” Mark Keenan, December 1, 2025

A frequently cited figure from S&P Global Market Intelligence is that 42 percent of companies have already scrapped their AI initiatives. The more dependent humans become on AI, the greater the danger that AI degradation leads to HI degradation. Heavy usage of AI may make humanity net stupider.

When AI works as envisioned, not detectably degrading, it processes vast amounts of often conflicting data. How does it resolve the conflicts? The resolution is primarily statistical—that which is most prevalent becomes what AI “learns.”
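
A toy sketch of that prevalence-based resolution, with invented snippets and counts: when the training data disagrees with itself, the most frequent claim simply wins.

```python
from collections import Counter

# Hypothetical training snippets containing a factual conflict.
snippets = [
    "the tomato is botanically a fruit",
    "the tomato is botanically a fruit",
    "the tomato is botanically a fruit",
    "the tomato is a vegetable",
]

def resolve(snippets):
    # Frequency decides: the most prevalent claim becomes what is "learned."
    counts = Counter(snippets)
    claim, freq = counts.most_common(1)[0]
    return claim, freq / len(snippets)

claim, support = resolve(snippets)
print(f"{claim!r} (supported by {support:.0%} of the data)")
```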

From the vast data that serves as its training input, the LLM learns associations and correlations between various statistical and distributional elements of language: specific words relative to each other, their relationships, ordering, frequencies, and so forth. These statistical associations are based on the patterns of word usage, context, syntax, and semantics found within the training dataset. The model develops an “understanding” of how words and phrases tend to co-occur in varied contexts. The model does not just learn associations but also understands correlations between different linguistic elements. In other words, it discerns that certain words are more likely to appear in specific contexts.

“Theory Is All You Need: AI, Human Cognition, and Causal Reasoning,” Teppo Felin and Matthias Holweg, Strategy Science, December 3, 2024
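
A minimal sketch of the association learning the passage describes, using raw bigram counts rather than a neural network (real LLMs learn vastly richer representations, but the statistical core is co-occurrence):

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each preceding word (bigrams).
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def next_word_probs(word):
    # Convert raw co-occurrence counts into a probability distribution.
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# e.g. {'cat': 0.33, 'mat': 0.17, 'dog': 0.33, 'rug': 0.17}
```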

AI output essentially represents consensus “knowledge” as measured by AI’s data surveying and statistical capabilities. What is defined as consensus may be an average weighted by the credentials and output of the various propagators of the data. It may, when it’s spitting out “answers,” note that the data conflicts and list alternative interpretations. However, aside from the fact that consensus, even weighted average consensus, is often wrong, there is a graver danger. Consensus wisdom is frequently the sworn enemy of innovation. Consensus-based AI may, on balance, retard more than it promotes innovation.
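
A sketch of that credential weighting, with invented claims and weights: once sources are scored by credentials, the best-credentialed view prevails even when it is wrong, which is exactly the failure mode illustrated by the flight example below.

```python
from collections import defaultdict

# Hypothetical (claim, credential_weight) pairs, circa 1902.
observations = [
    ("heavier-than-air flight is impossible", 0.9),  # eminent scientist
    ("heavier-than-air flight is impossible", 0.8),  # major newspaper
    ("heavier-than-air flight is achievable", 0.1),  # two bicycle mechanics
]

scores = defaultdict(float)
for claim, weight in observations:
    scores[claim] += weight

consensus = max(scores, key=scores.get)
print(consensus)  # the heavily credentialed (and wrong) view wins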

Felin and Holweg use the example of “heavier-than-air” powered, controlled human flight in the late 1800s and early 1900s. Imagine if AI had been around in 1902, and the query was made: Is heavier-than-air human flight possible? The seemingly confident answer would have been: Definitely not! That was the overwhelming consensus of the experts, and AI would have reflected it. Had AI been guiding decision making—one of its touted abilities—it would have “saved” humanity from taking flight. Fortunately, Orville and Wilbur had abundant HI and they disregarded the so-called experts, an often intelligent strategy.

So, why is AI being pushed so hard? Why are all the “right” people in government, business, academia, and mainstream media so devoted to it? Why are trillions being spent as the stock market bubbles?

If the last few decades have taught us anything, it’s that when an official agenda doesn’t make sense, especially when it has an element of “official science,” start looking for the real reasons, the hidden agenda. The COVID response wasn’t about health and safety. The manufactured virus, lockdowns, closing businesses, masking, social distancing, discouraging or banning effective remedies, overwhelming pressure for vaccine uptake, ignoring adverse vaccine consequences up to and including death, and proposed vaccine passports enabled totalitarianism.

Climate change has served the same purpose. Like AI, climate change “scientists” reverse the scientific process, insisting that reality conforms to their models. Operating in a protective bubble sustained by academia, the media, business, NGOs, governments, and multinational organizations, they’re hostile to the contrary evidence, questions, and criticism of their models that are essential to true science.

And like climate change and COVID, AI has the totalitarians and would-be totalitarians drooling. Collecting, assimilating, and manipulating data is the technological foundation of a surveillance state. That’s all the technototalitarians (See “Technototalitarianism,” Parts One, Two, and Three) require of AI—all-encompassing data that can be sorted by every available metric, including ones for which citizens might pose a threat, rhetorically or otherwise, to the government. Some of them must know AI will never get close to HI, but that’s a useful claim, a selling point, to attract massive amounts of capital from Wall Street and support from the technototalitarian Trump administration.

Totalitarian empowerment is probably the main thing Trump understands about AI. Here he shares common ground with the Chinese government (although it undoubtedly knows far more about AI than Trump). The president has embraced AI, touting the Stargate project the day after he was inaugurated and now throwing the full weight of the government, its scientific laboratories, and its private sector technology “partners” behind the Genesis Mission, an effort, supposedly on a Manhattan Project scale, to incorporate AI into virtually everything. Should the states, with their pesky concerns about AI’s huge requirements for land, water, and energy, try to intervene, Trump has just promulgated an executive order to federalize AI regulation.

It’s a Wall Street truism that governments jump on market trends when they’re about to end. AI hype has propelled AI stocks to dizzying heights. While few pundits and seers have questioned the flawed basic premise—that AI will completely surpass HI—some are starting to express concern about its staggering monetary and energy requirements and the circular nature of many of its financing arrangements. It would follow a long list of precedents if Trump’s Genesis Mission top-ticked AI. Perhaps it should have been named the Revelation Mission, after the last rather than the first book of the Bible.

An epic, AI-led stock market crash with a concomitant debt implosion would wipe out most of what’s reckoned wealth in America, plunging the nation into a depression. If the Genesis Mission makes the government a financial partner of the AI industry, or the industry is deemed “too big to fail,” taxpayers would be stuck with the tab. Many of AI’s promoters are on board with the you’ll-own-nothing-and-be-happy world our rulers envision. A crash would fit right in with their beyond-Orwellian agenda to impoverish and enslave America. Thus, they might regard this bubble that must inevitably pop as an AI feature, not a bug.

If you query AI about AI, it would, reflecting the consensus of experts, assure you that AI is only for the good. Human intelligence says: disregard the experts. Never has it been more important to think for yourself.

Source: https://www.theburningplatform.com/2025/12/13/ai-is-a-crock/#more-385372
