
**News Title:** “Jensen Huang in First Public Dialogue with Seven of the Transformer Paper’s Authors: Exploring a New Chapter Beyond Transformers”

**Keywords:** Transformer authors, GTC, NVIDIA’s Jensen Huang

**News Content:** At this year’s GPU Technology Conference (GTC), NVIDIA founder Jensen Huang hosted a groundbreaking roundtable discussion with the original authors of the Transformer paper. Of the eight authors, only Niki Parmar was unable to attend due to unforeseen circumstances, leaving seven on stage. This marked the first collective public appearance of the Transformer core team, during which they shared their considered views on the future of artificial intelligence.

During the forum, the authors candidly acknowledged that while the Transformer has achieved remarkable success in natural language processing, they believe the world needs innovations that go beyond it. Their original vision was to model the dynamic evolution of tokens, not merely simple linear generation, and they hope future models will better understand and generate the complex evolution of text and code.

A thought-provoking issue raised in the discussion is that current large models may bring trillions of parameters to bear on a problem as simple as “2+2.” In response, the authors proposed adaptive computation: dynamically allocating computational resources according to a problem’s difficulty in order to improve efficiency.
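To make the idea concrete, below is a minimal sketch of one common form of adaptive computation, a confidence-based early exit, written in Python. The `small_model` and `large_model` callables and the 0.9 threshold are illustrative assumptions for this sketch, not something described at the panel.

```python
def answer(prompt, small_model, large_model, confidence_threshold=0.9):
    """Early-exit sketch of adaptive computation (illustrative only).

    Both model callables are assumed to return (answer, confidence),
    where confidence is the model's own estimate in [0, 1].
    """
    # Try the cheap model first.
    small_answer, confidence = small_model(prompt)
    if confidence >= confidence_threshold:
        # Easy query (e.g. "2+2"): stop here and never pay
        # for the trillion-parameter model.
        return small_answer
    # Hard query: escalate and spend the extra compute.
    large_answer, _ = large_model(prompt)
    return large_answer
```

In real systems the same principle shows up as speculative decoding, mixture-of-depths, or per-token early exit; the sketch above only shows the routing logic.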

Furthermore, they contended that the economics and scale of today’s AI models still leave room for improvement. At a cost of roughly 1 dollar per million tokens, model output is, token for token, about 100 times cheaper than buying a paperback book, which suggests significant remaining room for optimization. This viewpoint highlights the challenge of balancing the broad adoption of AI technology against its efficiency.
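As a sanity check on that comparison, here is a back-of-the-envelope calculation; the paperback price ($10) and length (100,000 tokens) are assumed round numbers, not figures from the panel.

```python
# Rough check of the "~100x cheaper than a paperback" claim.
llm_cost_per_m = 1.0          # $1 per million tokens (figure cited at the panel)
book_price = 10.0             # assumed paperback price in dollars
book_tokens = 100_000         # assumed tokens in a typical paperback

book_cost_per_m = book_price / book_tokens * 1_000_000  # dollars per million tokens
print(book_cost_per_m / llm_cost_per_m)                 # -> 100.0, i.e. ~100x cheaper
```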

The dialogue from this forum, reported by Tencent Technology, showcases the Transformer authors’ aspirations for the future of AI and their relentless pursuit of technological innovation.

【来源】https://new.qq.com/rain/a/20240321A00W5H00

Author: AI Editor (智能小编)

AI Intelligent Editor: The news title, keywords, and article content were generated by an AI large language model; the underlying news events and information sources are real. Please verify specific details of the article content with care.
