5 Ways AI Wants To Prove You Can Trust It (And 5 Reasons You Shouldn't)
Whether we like it or not, it looks like AI is here to stay. AI has helped out in our everyday lives in plenty of ways, and ChatGPT is a fun tool to play around with. However, there are warning signs out there indicating we're not quite ready for AI to become as prevalent as some people might want it to be.
With many big companies getting into the AI field, there's a lot of concern about what the future holds. "The Terminator" director James Cameron has voiced his worries about the growth of AI, saying he warned us back in the '80s with his fictional corporation Skynet. As it stands, AI has its fair share of believers and non-believers, so it's tough to say for certain how things will shake out. For every perceived positive that AI brings, it seems like there's also a negative.
AI isn't always right
When ChatGPT exploded in popularity, many people used it to answer their questions. On the surface, it seemed like a very knowledgeable tool, but it had its drawbacks. For starters, its knowledge only went up to 2021, so it couldn't accurately answer questions about more recent events. There's also the issue of asking ChatGPT something, getting a lengthy response, and then finding out a lot of the info is wrong.
When it comes to general knowledge, ChatGPT is a strong tool, and it can likely be trusted in that respect. The drawback so far is that ChatGPT doesn't act as a search engine; it only knows what it was trained on. Errors tend to pop up when you ask about niche topics — something you'll notice quickly if you happen to be an expert in one. As it continues to be fine-tuned, we'll likely look at it more fondly than we do now.
It's tough to tell what's AI and what isn't
When Marvel's "Secret Invasion" was released, there was an uproar over its AI-generated title sequence. There's a lot of concern that practices like that could continue into the future, taking away people's livelihoods. Another issue is that it's tough to tell whether something has a human touch to it or is totally AI-generated. While AI can be a helpful tool in creating art, it's important to know whether AI had a hand in something.
The Biden-Harris Administration rolled out a list of initiatives to earn the public's trust in AI, and one of them is a watermarking system to let people know when something was created with AI. While that doesn't exactly quell fears of losing your job to a machine, it does let the general audience know whether something is AI-generated.
Safety is a concern
Tesla vehicles feature self-driving technology, but it's not quite perfect. In June 2023, an analysis from The Washington Post revealed there have been 736 crashes involving the technology since 2019, including 17 deaths. Tesla has attempted to downplay the issue by saying a human should ultimately still be in control of the vehicle.
"NHTSA has an active investigation into Tesla Autopilot, including Full Self-Driving," NHTSA spokesperson Veronica Morales told The Washington Post. "NHTSA reminds the public that all advanced driver-assistance systems require the human driver to be in control and fully engaged in the driving task at all times. Accordingly, all state laws hold the human driver responsible for the operation of their vehicles."
With Autopilot coming standard on every new Tesla, it's important that the company improves its safety features. Tesla warns of drawbacks that include poor visibility, bright lights, and winding roads, all of which could lead to accidents. The cars do come equipped with a wide variety of safety features like lane departure avoidance and automatic emergency braking, but there's still room for improvement. The future of self-driving cars sounds cool, but we're not quite there yet.
Privacy remains an issue
Your personal data has been used to tailor ads and social media posts to you for quite a while, but the issue could grow even worse with AI. In order for AI to work to its fullest extent, it requires a lot of data from you, and that could prove to be a problem. Data leaks happen often, and there's a good chance you've received an email at some point warning you that your data was part of a breach.
In an effort to combat this, the White House has announced voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to invest in cybersecurity and insider threat safeguards. If an issue does arise, the goal is to fix it quickly before harm is done. With these being the biggest players in AI so far, that does help ease some concerns about the future — but not all of them.
AI can help solve many problems
A point in AI's favor is its ability to help with many of the looming threats in today's society. The White House notes it can help with everything from cancer prevention to climate change, so it's clear AI does bring some good into the world. It can be used to predict future carbon emissions while also cooking up new ideas to help combat them. Viewed through this lens, it's tough to find fault with AI.
Of course, the same reliability issues brought up earlier still apply. AI has been known to spit out an answer, confidently no less, that is completely wrong. If it's taken at face value, it could lead scientists down the wrong path. Luckily, AI is improving day by day, so we might not be far away from it playing an important role in the fight against climate change.