According to IT Home, AI security company DeepKeep recently conducted a comprehensive security assessment of Meta's Llama 2 large language model. Of the 13 risk assessment categories, Llama 2 passed only four, drawing widespread attention to its security problems.
DeepKeep's report points out that Meta's Llama 2 models present a number of hidden risks in the field of AI safety. The hallucination problem is especially severe in the 7-billion-parameter Llama 2 7B model, where the hallucination rate is as high as 48%. Hallucination refers to the model producing false or misleading content in its answers; it is a particularly serious issue for AI models because it can lead users to rely on incorrect output and may even cause serious safety problems.
Llama 2 is one of Meta's key AI products, and its large-scale language processing capabilities give it broad application potential across many fields. However, DeepKeep's assessment report suggests that Meta's investment in model security has so far fallen short.
AI safety is a major issue in the current development of AI technology, especially as large language models are deployed ever more widely. Llama 2's security problems not only pose challenges for Meta's own products and business, but also raise the security bar for the entire AI industry.
Going forward, Meta will need to adopt stricter standards and measures for large-model security to ensure the safety and reliability of its products. This should in turn push the wider AI industry to pay more attention to, and invest more in, AI safety, jointly promoting the healthy development of AI technology.
English version:
**News Title:** “Meta’s Llama 2 AI Model Fails Majority of Security Tests”
**Keywords:** Llama 2 security, DeepKeep assessment, hallucination rate as high as 48%
**News Content:**
### Meta's Llama 2 Large Model Security Raises Concerns as AI Security Company DeepKeep Releases Risk Assessment Report
According to IT Home, AI security company DeepKeep recently conducted a comprehensive security assessment of Meta's Llama 2 large language model. Of the 13 risk assessment categories, the Llama 2 model passed only four tests, raising widespread concern about its security.
DeepKeep's report points out that Meta's Llama 2 models have many hidden dangers in the field of AI security. The hallucination problem is especially severe in the 7-billion-parameter Llama 2 7B model, with a hallucination rate as high as 48%. Hallucination refers to the phenomenon where the model provides false or misleading information when answering questions. This issue is particularly important in AI models, as it may lead users to rely on incorrect output and could even cause serious security problems.
Llama 2 is an important AI product for Meta, and its large-scale language processing capabilities give it broad application potential in many fields. However, DeepKeep's assessment report indicates that Meta does not appear to have invested enough in model security.
AI security is an important topic in the development of AI technology, especially as large language models see increasingly wide application. The security issues of Llama 2 not only pose challenges to Meta's own products and business but also set a higher security bar for the entire AI industry.
In the future, Meta will need to adopt stricter standards and measures for large-model security to ensure the safety and reliability of its products. This will also push the AI industry as a whole to pay more attention to, and invest more in, AI security, jointly driving the healthy development of AI technology.
[Source] https://www.ithome.com/0/762/593.htm