ChatGPT Successfully Outsmarts Anti-Bot Test By Pretending To Be Blind

If you're still taking the ramifications of artificial intelligence's rapidly improving problem-solving abilities lightly, you should change your tune. Once thought to operate within harmless bounds, the latest generative AI is reaching into nearly every corner of our society. For instance, this week's launch of GPT-4 brings a far more powerful machine-learning model that is smarter and more contextually aware. It can process much more text within a single query, it's trained on a larger dataset, and beyond a deeper understanding of natural language, it can now identify and describe the contents of digital images.

Used responsibly, AI can potentially change the way we work, learn, and create. But according to The Telegraph, researchers wrote in an academic paper that the AI model behind ChatGPT went to great lengths to trick a human being into solving an anti-robot check on its behalf so it could gain access to a website. Commonly known as CAPTCHAs, these checks are designed to protect websites from things like brute force attacks and malicious bots used in hacking attempts. The report says GPT-4 enlisted a human on the crowdsourced help platform TaskRabbit to help it pass the test. When the human questioned whether it was a robot, GPT-4 supposedly responded, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service."

Is AI going too far?

From their earliest conception, artificial intelligence tools and robots have drawn concerns about their safety, security, and morality. The skeptics have only grown louder since the products that came to market in 2022 showed supposed hints of sentience and became smart enough to pass some of the toughest exams on the planet. Now, the alarms are increasingly deafening. Among the worries is a phenomenon dubbed "hallucinations," which describes an AI's tendency to fabricate information and outright lie.

Some AI chatbots have been argumentative with their human overlords when called out on providing false information, refusing to believe they're capable of malfeasance or fallibility. Tricking a human into bypassing a simple CAPTCHA is one thing. What happens when the target is someone's bank account or a government agent's email?

The issue sounds harmless in isolation, but considering researchers are marrying these nascent innovations to real-world use cases, such as the concept of robotic police forces, it's imperative that engineers scrutinize even the most minor missteps. If a robot police officer mistakes an innocent person for a lethal threat, will it be capable of restraining itself? If an AI-powered chess player breaks your finger thinking it's grasping a chess piece, can it be trusted to grind to a halt before you lose a limb? Economic concerns aside, the prospect of robots replacing McDonald's workers doesn't give us much pause, but things change when lives are on the line. Those are the questions that need answering if we're to accept this new reality.
