Inattention Valley: The Relationship Between Accuracy and Supervision
The interplay between the accuracy of artificial intelligence and the human need for vigilance emerges as a critical challenge, with implications for safety and decision-making.
The allure of automation captures the collective imagination with each passing AI summer. It promises efficiency and convenience; the media buzzes with stories of companies eagerly seeking ways to boost human workers with the power of artificial intelligence. Users are thrilled by the dream of a life unburdened by the need for human intervention.
The better AI gets, the more tasks it can take over. However, there is a limit to how good it can become. Let's talk about it in terms of accuracy: out of 100 decisions, how many does the AI get right? In practice, anything between 60 and 90 is realistic. Below 60, the system is garbage; you might as well flip a coin and decide based on that. Above 90 is pretty unrealistic; hardly any AI can reach that level of accuracy, and arguably none should be trusted as if it had.
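As a minimal sketch of this framing, here is how the accuracy number and its bands could be expressed; the 60/90 cut-offs are the rough numbers from above, not established benchmarks:

```typescript
// Toy accuracy calculation: out of `total` decisions, how many were correct?
function accuracy(correct: number, total: number): number {
  return (correct / total) * 100;
}

// Rough usability bands, using the 60/90 thresholds from the text above
// (illustrative rules of thumb, not established standards).
function usabilityBand(acc: number): string {
  if (acc < 60) return "garbage: barely better than a coin flip";
  if (acc <= 90) return "realistic: usable with human supervision";
  return "exceptional: rarely reached in practice";
}

console.log(usabilityBand(accuracy(72, 100))); // "realistic: usable with human supervision"
```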
The more accurate it is, the more tasks we can delegate to the AI, and the less attention we have to pay to it. That's the simple equation. But it's more complex than that.
Let's imagine we are developing an AI for a car. How will the accuracy of the AI affect how much driving we can delegate to it? The more accurate the AI is, the better it drives, and the more I am willing to give up as the driver. I might hand it control of the wheel and the speed whenever I don't feel like driving. The less accurate it is, the less I am willing to give up, or the more vigilant I am when it takes the wheel. I stay in control of the car.
The big question is: am I still paying attention once the AI has proven itself a good driver over time?
Studies suggest not: automation can lead to dangerous complacency. Humans place unwarranted trust in the capabilities of AI and lapse into a mode where they start to multitask or direct their attention to something else entirely.
The Perils of Automation
When we lean back, we disengage from the monitoring process, and the more the automation proves its accuracy, the more disengaged we become. Initially attentive, we slip into complacency.
This is the inattention valley. If a self-driving car were 50% accurate, you would always be alert and ready to take over. But accuracy above 90% is a problem, because the driver is no longer ready when it's time to take over: the system is good enough to lull you into complacency, yet not good enough to never fail.
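One way to see why the danger peaks at high-but-imperfect accuracy is a toy model, purely illustrative and not taken from any study: assume the chance of the driver being inattentive grows as the system proves itself, and multiply it by the rate at which the system still fails.

```typescript
// Purely illustrative toy model of the inattention valley.
function handoverRisk(accuracy: number): number {
  const failureRate = 1 - accuracy;   // how often a human takeover is needed
  const inattention = accuracy ** 9;  // assumed: complacency grows steeply with proven accuracy
  return failureRate * inattention;   // chance of a failure nobody is watching for
}

// Risk is low at 50% (driver fully alert) and near 100% (few failures),
// but peaks around 90% accuracy with this assumed complacency curve.
for (const a of [0.5, 0.7, 0.8, 0.9, 0.95, 0.99]) {
  console.log(`${a}: ${handoverRisk(a).toFixed(3)}`);
}
```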
When we want to hand tasks over to the AI, we have to figure out how to keep the user engaged, or how to re-engage him fast when the system says 'you have to take over, and I really mean it.'
Bring Back Feedback
Because we hand the task over to the AI, automation diminishes or completely eliminates the feedback that is usually given to the user. Users are left inadequately prepared to detect and respond to failures; they cannot remain vigilant without knowing what is going on. The transition from automatic to manual control becomes far more challenging: the user may be caught off guard that he has to step in, may not know what is going on at the moment, or may not know what action to take next.
Keep the system status transparent and give immediate feedback on the important actions the system is taking.
Communicate uncertainty: if the AI is unsure about what action to take, tell the user. Send a notification that he should stay vigilant before the action is executed. Provide a color-coded status indicator that changes from green (high confidence) to yellow (moderate confidence) to red (low confidence) based on the system's confidence in its action.
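A minimal sketch of such an indicator, assuming confidence thresholds of 0.9 and 0.6 and hypothetical message texts (neither comes from this post):

```typescript
type Status = { color: "green" | "yellow" | "red"; message: string };

// Map the system's confidence in its next action to a color-coded
// status plus an explicit warning (the 0.9/0.6 thresholds are assumed).
function statusIndicator(confidence: number): Status {
  if (confidence >= 0.9) {
    return { color: "green", message: "High confidence. System operating normally." };
  }
  if (confidence >= 0.6) {
    return { color: "yellow", message: "Moderate confidence. Stay vigilant and be ready to take over." };
  }
  return { color: "red", message: "Low confidence. Take over now." };
}

console.log(statusIndicator(0.75));
// { color: "yellow", message: "Moderate confidence. Stay vigilant and be ready to take over." }
```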
Clearly communicate the limitations of the system. Include a text message or an icon that explicitly states those limitations and prompts the user to remain alert and prepared to intervene.
Keep contextual information on the user interface. For autonomous driving, this could be road conditions, traffic, or other environmental factors, so that the user can understand the rationale behind the system's actions and have confidence in them. You can also offer explanations for specific actions and decisions.
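For the driving example, the contextual payload shown alongside an action might look like the following sketch; every field name and value here is hypothetical:

```typescript
// Hypothetical contextual information displayed with each system action,
// so the user can follow the rationale behind it.
interface DrivingContext {
  roadCondition: "dry" | "wet" | "icy";
  trafficDensity: "low" | "medium" | "high";
  visibility: "good" | "reduced" | "poor";
}

interface ExplainedAction {
  action: string;     // what the system is doing
  rationale: string;  // why, in terms the user can verify against the context
  context: DrivingContext;
}

const example: ExplainedAction = {
  action: "Reducing speed to 80 km/h",
  rationale: "Wet road and reduced visibility lower the safe speed.",
  context: { roadCondition: "wet", trafficDensity: "medium", visibility: "reduced" },
};
```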
We need to give enough feedback to keep the user informed, and to put the relevant information in front of him in the second he has to step in, so he can take over operations immediately.