Title: “OpenAI’s New Video-Generation Model Sora Renders One Minute in One Hour”
Keywords: AI video generation, long rendering times, limited to demonstrations
News Content:
OpenAI’s Sora model is designed to generate realistic videos from text descriptions, but users have found that rendering a one-minute video takes over an hour. The revelation sparked heated discussion on Reddit, where users voiced frustration that the researchers showcased only predefined examples and did not accept custom prompts. With the longest demonstration video running just 17 seconds, the limits raised concerns about the model’s practical prospects.
While Sora demonstrates significant technical potential, the restricted user access and lengthy rendering times undoubtedly limit the model’s widespread adoption. Commentators note that if a model cannot deliver fast, efficient results in real-world use, its practical value is significantly diminished.
OpenAI has not yet issued a response, but the industry broadly agrees that improving model performance and optimizing the user experience are key directions for future development. As artificial intelligence technology continues to advance, there is reason to believe that Sora and similar technologies will offer more efficient and user-friendly solutions in the near future.
Source: https://www.ithome.com/0/751/364.htm
