The White House recently issued guidelines for the development and use of artificial intelligence by federal agencies.
Although privacy and security concerns surround the growing use and development of AI tools, the White House appears to be taking a light-touch approach to regulating the technology.
In a memorandum published Wednesday, the White House said: "Federal agencies must avoid regulatory or non-regulatory measures that unnecessarily hinder innovation and AI growth."
Simply put, artificial intelligence is a branch of computer science that trains machines or software to operate and solve problems like people.
"Artificial intelligence is building systems that do things that people think are intelligent," said Kristian Hammond, a computer science professor at Northwestern University. "So things that read, produce text, learn, drive, roam the world and answer questions – all of these are things that are central to human thinking."
AI enables smart speakers to recognize voice commands, for example, and self-driving vehicles to operate hands-free. Many of these systems are fed large amounts of data and instructions in a process known as deep learning, which helps the software interpret and respond to new stimuli.
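The learning process described above can be sketched in miniature. The toy, pure-Python example below (not any specific product's system) trains a single artificial neuron by gradient descent: it is shown labeled examples, repeatedly nudges its internal weights to reduce its errors, and afterward responds sensibly to the inputs. All names and data in it are illustrative.

```python
import math
import random

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, lr=0.5, seed=0):
    """Fit one sigmoid neuron to labeled (inputs, target) examples."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]  # start with random weights
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), y in samples:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            grad = (p - y) * p * (1 - p)  # gradient of squared error
            w[0] -= lr * grad * x1        # nudge weights toward the target
            w[1] -= lr * grad * x2
            b -= lr * grad
    return w, b

def predict(w, b, x1, x2):
    """Return the neuron's output for a new stimulus."""
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

# Labeled "stimuli": the target output is 1 only when both inputs are present.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

Real deep learning systems stack millions of such units into many layers, but the principle is the same: adjust weights from examples rather than hand-coding rules.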
The White House memo was released almost a year after President Donald Trump signed an executive order establishing the American AI Initiative, which Hammond said essentially formalized a hands-off approach to the technology.
"They basically say," Look, don't do anything that stands in your way, "said Hammond." The idea behind these guidelines is that they are technologies that are beneficial to both trade and security. "
But unfettered AI technology could lead to worrying results. A November 2019 article in The New York Times revealed how the Chinese government uses surveillance cameras and facial recognition software to track its Muslim Uighur population.
The memorandum also suggests that federal agencies could preempt AI legislation passed by state or local governments.
"In certain circumstances, agencies may use their authority to address inconsistent, burdensome, and duplicative state laws that prevent the emergence of a national market," the memorandum said.
San Francisco banned police and other city agencies from using facial recognition technology in 2019, but Hammond said the technology could be useful in certain public safety scenarios.
"People may think there are privacy and abuse issues, but they are very abstract," said Hammond. "But then there is the moment when a child disappears and the moment when the child disappears. I want to be in control of every single camera in my city and have my child's face." [in the software] and I want to be able to find this child. "