The article is contributed by Biswajit Das, Founder - Brandintelle Services
Businesses & governments can no longer do without AI/ML. Yet if we take its outputs too seriously, we may be misled, sometimes with very serious consequences. The reality is that in future, both governments & corporations may have to appoint AI/ML ombudsmen to check & control AI/ML algorithms.
We have all faced occasional inquiries from our respective Direct & Indirect Tax Departments pertaining to a past period, suggesting that we have evaded taxes. The notice then asks us to prove that we are not evaders by furnishing documents that we had already furnished years ago. These are the results of AI/ML algorithms run by Tax Departments on the vast amounts of data they have aggregated. In most cases, they serve as irritants, forcing citizens to submit the same documents multiple times at their own time & cost.
Apart from the stress this generates for semi-literate or otherwise challenged taxpayers, there is the odd case where the algorithm's allegation is of a more serious nature - without any justification. The repercussions of such AI/ML “activism” can be far graver in the case of patients' medical records, as the following case shows.
In 2021, Wired magazine reported the case of a patient suffering from a chronic condition which caused her great pain. Although the situation was not critical, the medicines were clearly not working, & surgery was very risky. To alleviate her pain, her physician naturally prescribed opiates as painkillers, regularly & for multiple years.
After many years, her physician recommended a hospital visit - for a second opinion as well as a possible alternative treatment. The patient visited & was asked to register as an in-patient for some examinations. She was admitted & the tests were conducted. Meanwhile, she ran out of her opiate painkillers & asked for a fresh prescription. This was a routine request as far as the patient was concerned - but it resulted in a deadlock. The hospital neither prescribed the medicine nor gave any reason. As soon as the tests were completed, she was asked to leave the hospital as they could not treat her any longer. Upset & perplexed, the patient went home & asked for an appointment with her regular physician. But she was in for a rude shock: her physician refused to treat her. After much pleading, she was able to extract the reason - her identity had been “blacklisted” as a substance abuser, & on this basis the medical profession had practically been asked to boycott her. It took her many months to reverse the situation.
What had happened behind the scenes just before this period is interesting. The hospital, along with a group of regional hospitals & physicians, had outsourced its Hospital Management System along with its Electronic Medical Records to a third party. The third party aggregated patient medical records from all the hospitals & physicians into a data lake, & as is the custom today, built some “intelligence” into its Hospital Management System with AI/ML. The algorithm that was deployed had inbuilt logic that apparently categorised all prescription opiate users as “substance abusers” if they used opiates for more than a certain (undisclosed) period! The Hospital Management software automatically “blacklisted” such patients & barred them from any medical treatment!
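The danger of such a rule is easy to see once it is written down. The sketch below is purely hypothetical - the actual system's logic, field names & threshold were never disclosed - but it shows how a crude duration-based rule treats a chronic-pain patient with years of legitimate prescriptions exactly like an actual abuser:

```python
from datetime import date

# Hypothetical threshold - the real (undisclosed) value is unknown
OPIATE_DURATION_THRESHOLD_DAYS = 365

def is_flagged_as_abuser(prescriptions):
    """Crude rule: flag any patient whose opiate prescriptions span
    more than the threshold, with no regard for whether the opiates
    were legitimately prescribed for a chronic condition."""
    opiate_rx = [p for p in prescriptions if p["drug_class"] == "opiate"]
    if not opiate_rx:
        return False
    first = min(p["date"] for p in opiate_rx)
    last = max(p["date"] for p in opiate_rx)
    return (last - first).days > OPIATE_DURATION_THRESHOLD_DAYS

# A chronic-pain patient with years of valid prescriptions is
# indistinguishable, to this rule, from a genuine abuser:
patient = [
    {"drug_class": "opiate", "date": date(2018, 1, 10)},
    {"drug_class": "opiate", "date": date(2021, 6, 5)},
]
print(is_flagged_as_abuser(patient))  # True - blacklisted despite valid prescriptions
```

The point is not the code itself but that a rule of this kind, once buried inside a Hospital Management System, is opaque to both patients & physicians - neither can see why the "blacklist" decision was taken.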
Current & future generations may rely on AI/ML as the ultimate decision-making tool. But in reality, AI/ML simply points to signs, trends & predictions - all based on applied statistics (from the eighties & earlier!) applied to available historical data. This was not possible until the price & availability of computing & storage power crossed a threshold - which happened only in the last 5 years.
But here’s the thing: both data & algorithms are prepared by human beings. Therefore both are imperfect. To make it worse, both data & algorithms are ‘opaque’ to end-users, making victims of some & perpetrators of others!
Yet we tend to treat the predictions & pointers as though they were the Gospel truth!
While these nudges & prediction techniques work in general, they can be fatally wrong at times. As the adoption of AI/ML proliferates, there will be a need to appoint regulators & ombudsmen with the right to examine & override decisions taken on the basis of AI/ML algorithms.