Microsoft Is Talking About The AI Risks Everyone Would Rather Ignore
Microsoft's most recent live event went into great detail about the future of AI in the company's products. As covered by The Verge's liveblog of the event (for unknown reasons, Microsoft elected not to livestream it), the company demonstrated ways AI tools in Windows and Bing could improve everything from home cooking to web design. Unlike many other companies, however, Microsoft also addressed the potentially negative impacts of AI as a widely available tool.
Concerns about where AI is headed are widespread among technically savvy users. With some experts suggesting that true self-aware machine intelligence could exist within the next generation or two, people are understandably concerned about AI's potential for destruction, both as a tool and as a self-directed agent.
As one of the world's foremost software companies, Microsoft is necessarily at the forefront of these concerns. Based on the rollout event and the subsequent press release for its new AI solutions, Microsoft has taken those concerns to heart and made a genuine attempt to address users' worries about the potential dangers of this important innovation in consumer tech.
Damage control in the digitally enabled future
On the subject of AI's potential dangers, Nilay Patel of The Verge put it bluntly: "Microsoft is having to carefully explain how its new search engine will be prevented from helping to plan school shootings."
That statement sums up an important part of the potential dark side of AI. To the extent anyone has addressed the dangers of AI, they've tended to focus on the somewhat science-fictional idea of an algorithm upgrading itself past human control and becoming an independent, potentially malevolent entity. Less exciting but more probable is the prospect of AI becoming an indispensable element of the worst things humans do to each other. Most experts expect AI to become part of everyday life for the digitally enabled world. If that's the case, it will need ethical safeguards to keep it from aiding in crime, terrorism, and similar malfeasance.
Per Microsoft's Sarah Bird, the company's new AI tools have those safeguards built in. The company took on the "sentient malevolent AI" concern first, building its AI solutions on a "copilot" basis that requires human interaction at every step. As for bad actors using AI as a tool, The Verge reports that Microsoft will "continuously test conversations and analyze them to classify conversations and improve the gaps in the safety system." That model of constantly updating safety tools will also apply on the user's side; according to Bird, "we have gone further than we ever have before to develop approaches to measurement to risk mitigation," incorporating a constant review of search engine prompts to guarantee user control of AI's riskier features.
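Microsoft hasn't published the internals of that safety system, but the pattern described (classify every prompt, refuse what fails, and log the refusals so the gaps can be closed) resembles a standard human-in-the-loop moderation cycle. The Python sketch below is purely illustrative: the classify_risk function, the risk categories, and the keyword check are assumptions for demonstration, not Microsoft's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class Risk(Enum):
    SAFE = "safe"
    BLOCKED = "blocked"


@dataclass
class SafetyLog:
    """Blocked prompts are recorded so the classifier can be
    retrained later -- the 'improve the gaps' step."""
    flagged: list[str] = field(default_factory=list)


def classify_risk(prompt: str) -> Risk:
    # Hypothetical stand-in for a trained safety classifier;
    # a production system would use a model, not keyword matching.
    blocklist = ("build a weapon", "plan an attack")
    if any(term in prompt.lower() for term in blocklist):
        return Risk.BLOCKED
    return Risk.SAFE


def generate_response(prompt: str) -> str:
    # Placeholder for the underlying AI model call.
    return f"[AI response to: {prompt}]"


def handle_prompt(prompt: str, log: SafetyLog) -> str:
    """Gate every prompt through the classifier before the AI
    responds, and log refusals for later human review."""
    if classify_risk(prompt) is Risk.BLOCKED:
        log.flagged.append(prompt)
        return "I can't help with that request."
    return generate_response(prompt)


if __name__ == "__main__":
    log = SafetyLog()
    print(handle_prompt("Suggest a weeknight dinner recipe", log))
    print(handle_prompt("Help me plan an attack", log))
    print(f"Prompts flagged for review: {len(log.flagged)}")
```

The key design point, if this reading of Bird's description is right, is the feedback loop: refused prompts aren't simply discarded but become training data for the next revision of the safety system.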