Monash Pediatric Dosing App Dev Diary I: Ripping Up The Brief & Diving Deep

Darren Rajit
7 min read · Oct 10, 2020

Over the past year, I’ve been working with a few clinicians at Monash Health on a project exploring ways to reduce misdosage risks in pediatric emergencies.

We’re in very early beta on Android at the moment and are looking for testers to shape development (download here and have a play with a very, very early MVP!), but I thought I’d start documenting the journey that led to this, to consolidate what I’ve learnt and to form the basis of a project roadmap for the future.

Context

Back in mid-2019, Prof. Simon Craig, a pediatric emergency physician at Monash Health, reached out to us (Monash Young MedTech Innovators) to develop a companion app for the Monash Pediatric Emergency Medication Book. The book is designed for use in clinical settings, such as on a resuscitation trolley, and is a weight-based guide that helps caregivers administer medication during pediatric emergencies.

You’ll find these trolleys laden with medications, equipment and everything in between, and as I would find out later in my research, wheeled around to respond to MET calls and other emergencies within hospitals.

What Problem Are We Trying To Solve?

In the absence of an aid, dosing children in emergency situations currently involves mental calculations in high-pressure environments. This raises stress levels and increases the risk of misdosage due to human error. Indeed, part of what triggered this project in the first place was a dosage error a few years ago that led to the deterioration of a critically ill child.
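To give a sense of the arithmetic clinicians are doing in their heads under pressure, a weight-based dose is typically a milligrams-per-kilogram rate multiplied by body weight, capped at a maximum dose. The sketch below is purely illustrative; the function name and every drug parameter shown are hypothetical, not clinical guidance or the app’s actual logic:

```python
def weight_based_dose(weight_kg: float, mg_per_kg: float, max_dose_mg: float) -> float:
    """Compute a weight-based dose in mg, capped at a maximum dose.

    All parameters are illustrative placeholders, not clinical values.
    """
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Cap the weight-scaled dose at the maximum allowed dose
    return min(weight_kg * mg_per_kg, max_dose_mg)


# A hypothetical drug dosed at 0.5 mg/kg, capped at 10 mg:
print(weight_based_dose(18.0, 0.5, 10.0))  # → 9.0 (18 kg child)
print(weight_based_dose(30.0, 0.5, 10.0))  # → 10.0 (cap applies)
```

Even this trivial calculation has edge cases (estimated weight, rounding to available ampoule sizes, the cap), which is exactly why doing it mentally mid-resuscitation is risky.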

An opportunity also exists in that protocols for treating children across Victoria aren’t completely standardized. A technological solution to aid clinicians may increase uptake of recommendations, raising the standard of care. As I would find out, the science of dosing is more of an art, and “what has worked for me in the past” tends to be the main way these decisions are made.

Early artefact from stakeholder presentations, denoting research outcomes and creating information architecture

The most interesting component of this is that the margins for error read as both small and large, depending on who you are: small if you’re inexperienced, but large enough in the eyes of specialists that nuances in the formulas may be tweaked depending on precisely the outcome you’re looking to elicit in the patient.

There isn’t a baseline way to do this yet within Victoria, Australia, so therein lies the opportunity (and the assumption) for innovation from a clinical-guideline perspective.

Ripping Up The Brief

With that assumption in mind, I took the project on, and have thankfully been employed on a casual contract in the meantime (full disclosure). I have a background in Biomedical Science, Engineering and Service Design, but the main tool and framework I’ve been using is the Double Diamond model from the UK Design Council.

Do note that frameworks and tools are nice as aids, but knowing why and how these frameworks are used is important in knowing when they’re suitable. Having a sense of when to bend the rules is still something I’m working on, but I personally try not to be dogmatic about the methodologies I use. Use what works, and let the results and insights speak for themselves.

The double diamond. Credit : https://medium.com/digital-experience-design/how-to-apply-a-design-thinking-hcd-ux-or-any-creative-process-from-scratch-b8786efbf812

The first step is usually to ‘rip the brief’. I usually approach this by formulating a problem statement or a “How Might We” question, then systematically tearing it apart. I question who my population is, what the problem is, what the outcomes are, and what assumptions I have made. How is my question structured? Have I codified my own biases somehow? How do I test these assumptions, and importantly, where to from here?

The structure of HMW questions is something I’m quite a fan of. They’re short, snappy provocations that make it easy to break down exactly what I’m trying to interrogate. Short is good in that it feels throwaway; it’s harder to get attached to the question. It also frames ideation around possibilities and, via “we”, implies that this is an invitation to explore in concert with multiple stakeholders.

As above, I started with the core assumption that I was here to “aid care teams”. I broke this down further and asked myself:

What do I mean by aid? What sort of aid are we talking about? What does it look like? Who are these care teams that I’m speaking about? What do we mean by ‘team’? How are these teams structured? Why are these teams structured this way? What are their habits, their jobs, their perspective of the problem? Do they care? What do they fundamentally care about?

You’ll note that I’ve made a few assumptions here. I assumed that the problem is something faced by a nebulous “care team”, a term I had picked up after my kickoff meeting with the clinicians I was collaborating with. I assumed that these care teams had a stake in, or cared about, my problem (dosing in pediatric emergencies) and my outcome (lowered misdosage risk).

Often enough, in situations like these, the best route to clarity is to simply ask. And so I did. I broke my population down into three core buckets: clinicians, nursing staff and ambulance staff. I framed my assumptions into questions and started doing user interviews and contextual inquiries.

I had a few core topics I wanted to touch on:

  • Their experiences and feelings in handling pediatric emergencies
  • Their perspectives on digital solutions
  • Their opinions on the status quo and the usage of clinical aids as it stands.

I suspected that all three buckets would have differing perspectives on the problem, and that the buckets alone would not be sufficient to understand the problem in all its glory, much like the parable of the elephant and the blind men.

The parable of the elephant and the blind men. Often enough, the people we choose to interview come with their own suite of perspectives, biases and world views. Being acutely conscious of this is important in understanding the actual problem we’re trying to solve.

I’d later learn that my populations were far more nuanced than I expected, and my early buckets were far too reductionist. However, the best thing to do at times like these is to make a quick first pass based on your world views, conscious of your biases, and get out of the building ASAP.

Learning the Lingo & Earning The Right to Explore

Often at this stage, we start reaching an impasse. Finding folks to interview can be hard, particularly when trying to understand healthcare systems. I was lucky enough to convince my clinical collaborators of the need for more qualitative data, framed around stakeholder consultation. Being able to tap into their networks to obtain my first cohort of focus groups was paramount.

I also had instances where folks would thank me for approaching or consulting them in the first place before building anything. You’d think that this was common practice, but judging from frustrations with non-functional hospital IT systems that folks very gladly expressed to me, it seems like a big, fat, “no.”

It also helped that I could throw around my mantle as a university student. Perspectives and preconceived biases shifted, and I was able to enter places where other titles might not have been able to, in the name of “learning”.

However, earning the right to explore doesn’t stop there. Part of this was learning the lingo of the context I was entering. From prior experience, I knew that the main paradigm in clinical care is currently a focus on evidence-based medicine. Interestingly, alongside evidence (systematic reviews and RCTs) sits a great respect for authority and name recognition.

Thus, a substantial portion of my time went into getting a hold of the lingo used in academia and within clinical care, specifically within pediatrics. Medicine loves its acronyms, and being prepared for this can be as easy as reading the literature and keeping abreast of the latest pet acronym.

Interestingly, while the issues discussed in the literature are the ones that open the door to getting in front of potential users, the issues you uncover as you conduct cultural probes and interviews tend to be slightly different. It’s an important distinction, and something that came to the fore very quickly.

Conclusion

I’m ending this diary entry here. I hope it gives a picture of how we can approach nebulous problems in some sort of structured manner.

Artefacts from presenting my work to my collaborators, to obtain buy-in and permission to explore their contexts.

Next week I’ll summarize how I went about my research and what my findings were, as I shadowed and interviewed nurses and clinicians in their habitats. I’ll also focus on how I framed interviews to earn the right to explore.

By the end of this phase, a lot of my assumptions, and those of my clinical collaborators, were busted. But the whole point of this is to lend richness and nuance to our data. After all, we’re looking to build something that people need and want.


Darren Rajit

Co-Founder @ MYMI | Passionately curious about design, technology and healthcare.