Google vows not to develop artificial intelligence weapons

Google has released a set of principles to guide its work on artificial intelligence, unveiling its new AI ethics policy after it dropped out of the Pentagon's Project Maven, which uses artificial intelligence to analyze drone footage. At the same time, the company will continue to work with governments and the military in many other areas, according to Sundar Pichai, Google's chief executive.
The search giant has vowed not to develop artificial intelligence technologies that could be used in weapons or surveillance. In a blog post, the company said its new policy focuses on building AI responsibly and reducing any potential misuse: Google will not design or deploy AI as a weapon or as a tool of surveillance, and it will refuse to pursue any AI project that causes or is likely to cause overall harm unless it believes the benefits substantially outweigh the risks, and even then only with appropriate safeguards.
The policy comes just days after the company announced that it would not renew its contract with the US Department of Defense to analyze footage from unmanned aerial vehicles, following pressure from employees who sent a letter to the company's CEO asking it to end its ongoing involvement in Project Maven. Although the company maintained that its participation was for non-offensive purposes, some employees feared that such technology could one day be used in actual warfare.
Google is one of the most influential companies in the field of artificial intelligence, describing itself as an AI-first company; it owns the data science platform Kaggle and the open-source framework TensorFlow, and employs a number of outstanding researchers in the field. According to Pichai, the company will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruitment, veterans' health care, and search and rescue.
The principles also state that Google will not develop AI surveillance projects that violate internationally accepted norms, or projects whose purpose contravenes widely accepted principles of international law and human rights. Instead, it will focus on socially beneficial AI research: avoiding the creation or reinforcement of unfair bias, keeping its systems accountable to people and subject to human control, upholding high standards of scientific excellence, and incorporating privacy safeguards.
"We use artificial intelligence in Google to make products more useful, starting with spam-free email, helping users create the required messages through the digital audio assistant that you can talk to naturally, and the images that bring out the fun things to enjoy," said Sunder Bechai. , And we recognize that this powerful technology raises equally important questions about how to use it, how to develop and use artificial intelligence and its great impact on society over the next several years. As a leader in artificial intelligence, T deep responsibility to achieve this. "
Google's decision to clarify its ethical position on AI development follows years of concern about the threats posed by automated systems and growing warnings about where the technology is headed. Last month, a coalition of human rights and technology groups produced a document entitled the Toronto Declaration, which calls on governments and technology companies to ensure that artificial intelligence respects the fundamental principles of equality and non-discrimination.
The development of artificial intelligence has drawn criticism from a wide range of figures, including Elon Musk, founder of Tesla and SpaceX, and Yann LeCun, a computer scientist specializing in machine learning, computer vision, and robotics. Silicon Valley technology companies are now preparing to devote additional resources to AI safety research with the help of ethics-focused organizations such as the nonprofit OpenAI.

source
