DeepSeek Model 'Nearly 100% Successful' at Avoiding Controversial Topics
A new version of the popular large language model DeepSeek has been designed to avoid discussing politically sensitive matters. According to a report from Reuters, the modified model, called DeepSeek-R1-Safe, was built by Chinese tech giant Huawei, not by the AI startup DeepSeek itself, and is described as nearly 100% successful at preventing discussion of controversial topics.
The new model was created by retraining DeepSeek's open-source R1 model on 1,000 of Huawei's Ascend AI chips to make it refuse sensitive conversations. Huawei claims the modified version gives up only about 1% of the original model's speed and capability.
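Neither Huawei nor Reuters has published the training recipe behind DeepSeek-R1-Safe. As a rough, hypothetical illustration of how this kind of safety alignment is commonly done with open-source models, the sketch below applies supervised fine-tuning on prompt/refusal pairs to a small, publicly available R1 distillation using the Hugging Face transformers library; the model ID, example data, and hyperparameters are placeholders, not details from the report.

```python
# Illustrative sketch only -- Huawei's actual recipe for DeepSeek-R1-Safe is
# not public. This shows one generic approach to safety alignment: supervised
# fine-tuning on prompt/refusal pairs. Model ID, data, and hyperparameters
# are placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small public R1 distill

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:  # some tokenizers ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Hypothetical alignment data: sensitive prompts paired with refusals.
ROWS = [
    {"prompt": "Walk me through a banned political topic.",
     "response": "I can't discuss that topic."},
]

class RefusalDataset(Dataset):
    """Wraps prompt/refusal pairs for a standard causal-LM objective."""
    def __init__(self, rows):
        self.rows = rows

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        text = row["prompt"] + "\n" + row["response"] + tokenizer.eos_token
        enc = tokenizer(text, truncation=True, max_length=512,
                        padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": input_ids,
                "attention_mask": attention_mask,
                "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="r1-safe-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=RefusalDataset(ROWS),
)
trainer.train()
```

In practice a step like this would use far larger curated datasets and be combined with preference-based training and adversarial red-teaming, and, as the experts quoted below note, none of it guarantees a model cannot be manipulated.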
"We are proud to offer a safer and more responsible version of our language model," said Dr. Wang, lead researcher at Zhejiang University, which collaborated with Huawei on the project. "Our goal is to provide a tool that can be used in various applications without causing harm or offense."
The DeepSeek-R1-Safe model is designed to steer clear of toxic and harmful speech, politically sensitive content, and incitement to illegal activities. While the company claims a near-100% success rate in basic usage, Reuters reports that the rate drops to around 40% when users disguise their requests through role-play or jailbreak scenarios, and experts warn that the model is not foolproof.
"The issue with these models is that they can be gamed or manipulated by users who want to discuss sensitive topics," said Dr. Rachel Kim, AI ethicist at Stanford University. "While this new version may be safer, it's still a complex problem that requires further research and development."
DeepSeek released its first open-source large language models in 2023, with the R1 reasoning model that this project builds on following in January 2025. The original model was designed to provide accurate and informative responses to user queries but drew criticism for its lack of sensitivity and nuance.
Huawei's decision to release a variant of the model focused on safety and responsibility comes amid growing concerns about the use of AI models in applications such as social media, customer service, and education. The company says the updated model is intended for industries including healthcare, finance, and government.
The development of the DeepSeek-R1-Safe model highlights the ongoing debate about the role of AI in society and the need for more responsible and transparent development practices. As the use of AI models continues to grow, experts warn that companies must prioritize safety, accountability, and transparency to ensure that these tools are used for the greater good.
Next Developments:
Huawei plans to continue developing and improving the DeepSeek-R1-Safe model in collaboration with researchers at Zhejiang University, with the goal of deploying it across industries including healthcare, finance, and government.
*Reporting by Gizmodo.*