Blissful Life

When you apply skepticism and care in equal amounts, you get bliss.

Month: March 2022

  • Decommissioning Technology Centered Theories of Change

    If you look closely, many theories of change in public health where technology is involved have, at their heart, the following idea:

    Adopting Technology -> leads to -> Better Health

    This is a meaningless assumption, guided by the hype around what technology can accomplish and wishful thinking about solving large problems.

    Firstly, technology is a large, amorphous, heterogeneous categorization of human innovations. There are a thousand different kinds of technology. One could say anything that human beings have made is technology.

    Wikipedia says: “Technology is the sum of any techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives”

    Oxford dictionary says: “scientific knowledge used in practical ways in industry, for example in designing new machines”

    Here is a list of things. Pick the ones you think are technology:

    • MRI machine
    • Stethoscope
    • Instant messaging app like WhatsApp
    • A paper tea cup
    • Polio vaccine
    • Mobile phone
    • Solar panel
    • Wallet
    • Pen
    • Clothes
    • Fishes
    • Scissors
    • Fan
    • Ubuntu Linux
    • Paracetamol
    • Breast milk

    Generalizations like “adopting technology will improve universal health coverage” are as useful as saying “human innovations will improve universal health coverage”.

    The second problem with the unquestioning acceptance of technology is that technology isn’t always positive, or even value neutral.

    • Nuclear bomb
    • Pegasus spyware
    • Deepfake
    • Fake news bots
    • Addictive apps
    • Fossil fuel
    • Heroin

    Now if you are a tech-bhakt, your primary reaction to the above list is “Oh, but you know, these are just harmful uses of an otherwise good/neutral technology. It is a human problem, not the technology’s problem.” But please read on carefully.

    The world we live in is populated by an increasing number of human beings. And human beings interact with technology in many ways. Some are predictable, some are unpredictable. The effect that a technology has on anything cannot be assumed to be “universally positive”. That effect has to be studied and understood.

    That is not an argument against developing technology. It is an argument only against how technology is advertised and incorporated into human life. Technology should not be pushed into systems without weighing the potential advantages against the potentially harmful effects it can have. Such a push can be counterproductive if the real harms outweigh the real benefits.

    Any use of technology will lie in a spectrum that ranges from extremely beneficial to extremely harmful. It takes discretion to identify where on the spectrum it lies. That human discretion, rationality, and scientific temper is what we need to develop in theories of change surrounding technology in public health.

       ***

    Now that we have established that technology in public health needs to be evaluated on an intervention-by-intervention basis, we can look at some specific examples.

    Digital (enabled) delivery of healthcare

    This vague concept is a slice of the “technology” concept we discussed above. Digitally enabled healthcare delivery can mean anything. Is it digitally enabled if I use a digital thermometer or a digital blood pressure device? Is it digital delivery if I’m doing a consultation over WhatsApp video call?

    Let’s take a plausible example: a hospital information management system with electronic medical records and teleconsultation. This is probably what many people have in mind when they talk about things like “medical and diagnostic connectivity throughout life course for every individual” or “sophisticated early warning systems leading to better preventative care”.

    That example brings up a lot of questions. Which hospital are we talking about? Where is it located? Who are the beneficiaries and users of these? What kind of skills are we talking about? What kind of resources are available in these settings? What costs in terms of attention, time, effort, fatigue, etc. are involved in utilizing these systems? What kind of software is available? How practical are the benefits? What are the challenges in taking data out of an EMR system and building early warning systems out of it? What foundational technologies are we lacking to build such systems? How will data from EMRs be analyzed? Who will do the analysis? What are the political processes occurring in India which could be connected to these? What are the incentives given to the private sector in this? What are the protections required for patients? What are the support structures required for healthcare workers? Who is this intervention meant to benefit? How does it affect health equity? Is it solving a problem that has been expressed by the community into which it is being introduced?

    These and countless other questions have to be answered before considering whether an intervention like the one above will lead to the impact it is assumed to produce. Now, as is evident from these questions, the answers will vary widely depending on the setting. It might (or might not) produce an overwhelmingly positive impact in a super-specialty hospital in Koramangala for a software developer working with Infosys. It might not produce a similar impact in a PHC in Koppal for an NREGA-dependent person. Unfortunately, a lot of Indians are of the latter kind.

    Techno-legal regulations

    Here is another vague slice. Of course technology has to be regulated. Technology has always been regulated. It is just that some newer technologies, like databases and software platforms, are only slowly getting regulated. The concerns that regulations here try to address are “Citizen rights, privacy and dignity”, “reducing technological inequities, algorithmic bias”, and so on.

    But “regulations”, like “technology”, are not a sure-shot solution to anything. A lot of regulations stifle technology without fulfilling the purpose they were meant to serve. Take the telemedicine guidelines released in March 2020, for example. In an attempt to enable telemedicine, they restricted the kinds of diagnoses and prescriptions that can be made over telemedicine.

    Getting regulations right is super hard. In the case of software-based technology, even when regulations are right and tight, people tend to find loopholes rather quickly. Because software can be adapted quickly, it is possible to follow a regulation and still continue doing bad stuff. Take how, after GDPR came into force requiring consent for cookie use, dark patterns spread through cookie-consent pop-ups.

    India has a government which went to the Supreme Court to argue that privacy is not a fundamental right. When the government itself is involved in treating human beings as citizens to be controlled through surveillance, what insulation can regulations provide to human rights like privacy?

    In other words, “equitable, people-centered and quality health services” and “improved accountability and transparency in the health system” cannot come through techno-legal solutions when the democracy does not have those on its priority list. Surely, technology and law can be instruments of social transformation. But only in the right hands.

    There is no question of equity and people-centeredness emerging out of a process that does not have representation of people in it. What about quality? There are already some frameworks for quality in healthcare services. NHSRC, NABH, etc. have various accreditation policies for hospitals. It takes a lot of work to build a culture of quality in a complex organism like a hospital, let alone a health system. Culture is not something technology can fix.

    Technology can be omnipresent. But a human cannot yell at a machine to get justice. That technology can lead to better accountability is a dream. Remember the game where technology rapidly adapts to regulations and finds loopholes? Human beings are ten times better than machines at that game. Any accountability system based on technology will be gamed by human beings.

    To see how technology and law affect transparency, one just has to look at what is happening to the Right to Information Act in our country today. No matter how “sophisticated” our technology gets, human beings are going to remain human beings.

       ***

    And that is where “trust in the health system” comes in. How should human beings trust a system that doesn’t listen to them, negates their experiences, puts barriers in front of them in accessing healthcare, reduces health to the singular dimension of curative services (or recently vaccinations), treats them as undeserving, and regularly intrudes on and violates their right to life and bodily integrity? What app should they install to download some trust?

    Discussions on technology in public health need to wait till we discuss who our health systems are for. And once we have an answer on that, we should invite those people to the table. And when they state the problems that they face in leading a healthy life, those are the problems to be solved. Work backwards from there and you’ll realize that a lot of what we have are problems that don’t require technology to command citizens, but instead require human beings to listen to human beings.

  • Finding Direction When Being Pragmatic

    You remember how I embraced pragmatism and started chasing power? There was one problem. When you start chasing power with the idea of wielding it for social justice, when and where do you stop chasing power and start wielding it?

    Take Praveen’s comment, for example:

    Screenshot of text chat. Pirate ‍ Praveen (he/him) quotes asd's message "Context: https://blog.learnlearn.in/2021/09/power-is-useful.html" and comments "Though this is a slippery slope and one which usually results in concentration of power eventually in most cases, there are exceptions though. When you start making compromises, where do you draw the line? That is not easy." asd: "Hmm. I know that is a valid criticism."  Pirate ‍ Praveen (he/him): "Usually the short term power and sustaining becomes the primary goal and everyone forgets the initial goals. Look at any political parties." 

    One possible answer can be that you start wielding power while you start chasing power – and you chase less and wield more as you go forward.

    Graph with time on the x axis and “amount of effort” on the y axis. As time goes forward, “chasing power pragmatically” decreases and “using power to reach ideals” increases.

    But going by this, today I should spend less effort chasing power than I spent yesterday. And tomorrow, even less than today. That doesn’t quite fit with the idea of chasing power first. Perhaps there is a threshold of power I should reach before I start using power. Perhaps the graph is more like:

    Similar graph as above. X axis is time. Y axis is amount of effort spent. Towards the beginning on the X-axis of time, the Y axis is completely occupied by chasing power pragmatically for a while. At one point using power to reach ideals starts and then correspondingly chasing power decreases.

    Perhaps that threshold is what is called “the line”. The line that determines when you stop (or decrease effort in) chasing power and start using that power to reach ideals. Drawing the line becomes important once again.

    Let us then try drawing that line.

    How much power is enough power? Is a PhD enough academic power? Is a 20-person company that operates at a profit enough entrepreneurial power?

    Read my poem (?) about career advice. Any goal you accomplish will be dwarfed by a bigger goal. No matter how much power you gain, there will be someone more powerful than you.

    Which means that there is no clear way to draw the line on when to stop chasing power.

    But there may be an alternative that requires us not to draw a line. One in which we can chase power and use power simultaneously, with the same effort. That alternative requires us to reconcile pragmatism and idealism.

    You find a hack to chase power through your ideals.

    That is extremely slow though. Slow and excruciatingly boring.

    Which is why it has to be extremely personal. You have to be very selfish in what you are doing and craft the journey to your likes and desires. Only that can sustain the boredom of that chase.

    (It was Varsha who told me first about entrepreneurship being a very personal journey. This maps on to that. Life is a very personal journey.)

    That also solves a long-running question in my mind. How do you find what direction to go in when you are being pragmatic? What’s the principle with which you make pragmatic decisions?

    The answer is to listen to yourself. To do what feels the most right to you. I know that sounds like profound bullshit (something that internet gurus would say). But it is based on neuroscience and philosophy of knowledge.

    The brain is a rather complicated organ. We process many more signals than we are conscious of. Even when we think we are making decisions rationally, we make them based on very many things we haven’t consciously considered. Read Scott Young’s Unraveling the Enigma of Reason for more on how our reasons are always post-facto rationalizations.

    And in the external world, this ties into intersectionality. There is no decision on earth that lies on a single dimension. Everything affects everything else and nothing is clear-cut.

    And thankfully these are complementary. Only a decision-making machine as vastly complicated as our brain can consider the thousand factors that intersect on a decision in the human world. (I express similar thoughts in the earlier post on living with opposition.)

    It also means it is difficult to rationalize some of these decisions and generalize them into principles. Pragmatism is the acceptance of this fundamental difficulty and the decision to live within that framework of uncertainty.

    Of course, one has to read and learn widely to offset the risks of trusting an uninformed brain. One must also be open to unlearning, relearning, criticism, and so on. These are the things that will protect the pragmatic person from going in the wrong direction.

    tl;dr? Trust your gut.