  • 12/04/2019

    Post Link

    CP Grey quote

    The real world doesn’t care what you are bad at, it only cares what you are good at.

    CP Grey.

  • 11/04/2019

    Post Link

    Late night thoughts #4

    Late night thoughts on medical education #4: Maps and scheming over schemas

    One of the problems in learning clinical medicine is the relation between an overall schema of what you have to learn and the detail of the various components that make up the schema. I can remember, very early in my first clinical year, seeing a child with Crohn’s disease and subsequently trying to read a little about this disorder. My difficulty was that much of what I read contrasted Crohn’s with various other conditions — ulcerative colitis, coeliac disease and so on. The problem was that I didn’t know much about these conditions either. Where was I to start? A wood-and-the-trees issue.

    I have, pace Borges, written about maps and learning before. This is my current riff on that theme. I am going to use learning how to find your way around Edinburgh as my example. There is a simple map here.

    That fine city

    The centre of Edinburgh is laid out west to east, with three key roads north of the railway station. You can imagine a simple line map — like a London Underground map — with three parallel main roads: Princes Street, George Street and Queen Street. You can then add in a greater level of detail, and some arterial routes in and out of the city centre.

    If you were visiting Edinburgh for the first time, you could use this simple schema to try and locate places of interest. If you were lost and asked for help, it might prove useful. You could of course remember this simple plan — which is the most northerly of these three streets and so on — or perhaps use a simple cognitive prosthesis such as a paper map.

    Students learn lots of these maps when they study medicine, because they are asked to find their way around lots of cities. They also forget many of them. The more complete the map, the harder it is to recall. If they have to navigate the same terrain most days, their recall is better. No surprises there. If you challenge a student, you can literally see them reproducing the ‘map tool’ as they try to answer your question — just as, if you ask them the causes of erythema nodosum, you can literally see them counting off their list on their fingers.

    Novices versus experts

    There are obvious differences between novices and experts. Experts don’t need to recall the maps for multiple cities; instead they reside in the city of their specialty. Experts also tend not to be good at recalling long lists of the causes of erythema nodosum; rather, they just seem to recall a few that are relevant in any particular context. The map metaphor provides clues to this process.

    If you challenge experts they can redraw the simple line figure that I started this piece with. They can reproduce it, although as the area of coverage is increased I suspect their map may begin to break the rules of 2D geometry: they move through the city professionally, but they are not professional cartographers.

    The reason for this failure is that experts do not see the ‘line map’ in the mind’s eye, but actually see images of the real geography in their mind as they move through it. They can deduce the simple line graph, but this is not what they use diagnostically to find their way around. Instead, they see the images of the roads and buildings and navigate based on those images. They have their own simulation, which they can usually navigate without effort. Of course, when they first visited Edinburgh, they too probably crammed a simple line graph, but as they spent time in the city, this simple cognitive tool was replaced by experience.

    This way of thinking was, AFAIK, first highlighted by the US philosophers Dreyfus and Dreyfus. They pointed out that novices use ‘rule-based’ formal structures, whereas experts do not. This is obvious in craft-based perceptual subjects such as dermatology (or radiology or histopathology). Experts don’t use checklists to diagnose basal cell carcinomas or melanoma; they just compare what they see with a personal library of exemplars. The cognitive basis for this ability, taking advantage of the idea of ‘familial likeness’, has been studied for a long time, although I do not think the problem is solved in any sort of formal way. It is usually very fast — too fast for the explicit scoring methods promoted by most clinicians and educators.

    Although this way of thinking is easiest to appreciate in perceptual subjects such as dermatology, most clinicians do not view things this way — even when the experimental evidence is compelling. Some believe the explicit rules they use to teach students are how they do it themselves. Others believe that experts are fluent in some high-level reasoning that students do not possess. They like to think that their exams can test this higher-level ‘deep’ reasoning. I think they may be mistaken.

    Finding the takeaway

    There are some ideas that follow from my story.

    1. Without wishing to encourage the delusion that factual recall is not critical to expertise, experts and novices do not possess the same methodology for working out what is going on. This means that we might promote simple structures that are placeholders for expert knowledge that will come through experience. These placeholders are temporary and meant to be replaced. We should be very careful about making them play a central role in assessment. To me this is akin to the way that some written Asian languages have different systems for children and adults.
    2. Some of these placeholders might need to be learned, but some can be external cognitive prostheses, such as a paper map or a BNF.
    3. Having to memorise lots of simple line-maps for lots of different cities imposes a heavy cognitive load on students. Long-term memorisation of meaningful concepts works best when you don’t know you are trying to memorise things, but rather were trying to understand them. Our students are all too often held hostage to getting on by ‘reproducing’ concepts rather than understanding them.
    4. Becoming expert means minimising the distance between rote learning of line-maps and building up your library of exemplars. Distance here refers to time. In other words, the purpose of prior learning is to give you the ability to try and navigate around the city so that you can start the ‘real’ learning. Some cities are safer than others — especially if you might get lost. Better to start in Edinburgh than Jo’burg (the ITU is not the place to be a novice).
    5. If you look at the process of moving from being a student to acquiring high professional domain expertise (as a registrar), it would seem better to focus on a limited number of cities. What we should not do is expect students to be at home in lots of different places. Better to find your feet, and then, when they get itchy, move on.
  • 10/04/2019

    Post Link

    University teaching awards

    [University] teaching awards seemed to have been added like sticking plasters to organisations whose values lay elsewhere.

    Graham Gibbs, Item Number 41, 2016, SEDA

  • 09/04/2019

    Post Link

    We have no doctors (again)

    We have no incentives, not doctors.

    Shortage of GPs will never end, health experts say | Society | The Guardian

    OK, maybe the subeditor is to blame, but spare me the cartel of health think tanks and their pamphlets. Enticing people into general practice and keeping them there is not rocket science. When I was a junior doctor, getting onto the best GP schemes around Newcastle was harder than getting the ‘professorial house-jobs’. Many people like, and want to be, GPs. If general practice is dying, it is in large part because the NHS is killing real general practice.

    A few years back I wrote a personal view in the BMJ, arguing that an alternative model for dermatology in the UK would be to use office dermatologists, as in most of the first world. It is likely cheaper and capable of providing better care, as long as you consider skin disease worthy of treatment. The feedback was not good, or in some instances even polite. The more considered views were that my suggestion was simply not possible: how would we train these people? Well, jump on a ferry or book Ryanair, and look at how the rest of Europe does it.

    There are some general discussion points:

    1. The various NHS’s in the UK do many things very badly. The comparison is all too often with what lies west of the Shannon, rather than that body of land closer to us.
    2. The proportion of ‘health staff’ who are doctors has been dropping for over a century. This trend will — and should — continue.
    3. I write from Scotland: Adam Smith worked out the essential role of specialisation in economic efficiency over two centuries ago. Conceptually, little has changed since, except the cost of health care.
    4. The limit on my third point is the transaction costs of movement between specialised agents. This is akin to Ronald Coase and the theory of the firm: why do we outsource, and when do firms outsource? How do we create — to use a software phrase — the right APIs?
    5. Accreditation and professional registration are there to protect the public. We will only encourage staff to take on the new roles needed if there is a return on their personal investment, in the form of formal admission to the appropriate guilds. These qualifications need to be widely recognised and transferable, and the guilds will need to be UK-wide (or, in the longer term, wider still).
    6. The current system of accreditation for those providing clinical care is bizarre. Imagine you know a bright and ambitious teenager. You tell her to come and sit in your dermatology clinic for 5 years and, at the end, you employ her in your practice as a dermatologist — initially under your supervision. Well, we know that is not a sensible way to train doctors, but this is indeed the way the NHS is going about training those who will provide much face-to-face clinical care. Got a skin rash — see the nurse! (For a couple of personal anecdotes, see below.)
    7. The current system of accreditation for a particular role for doctors is based around individual registration (with the General Medical Council). What the public require, however, is evidence of registration for defined roles and procedures (using the term procedure in a broad sense, and not just as in a ‘surgical procedure’). If somebody is a dental hygienist they are registered with the General Dental Council. This makes sense. The sleight of hand in medicine is that individual hospitals or practices have taken on the role of accreditation. I suspect if private individuals — rather than the NHS or its proxies — did this, they would be considered to be riding roughshod over the Medical Act (I am no lawyer…).
    8. Accreditation of medical competence at the organisation level is indeed a possible alternative to individual personal registration. It might even have advantages. But this has not been the norm in the UK (or anywhere else), and the systems to do this are not in place.

    Two personal examples:

    I received an orthopaedic operation under a GA at a major teaching hospital. I was in my mid-50s, and previously fit. At the clerking / pre-op assessment by a nurse, my pulse and BP were recorded, and my urine was tested. I was asked: “Are your heart sounds normal and do you have any heart murmurs?” (There was no physical examination.) My quip — how could you trust a dermatologist on such matters — was met with a total lack of recognition. I recounted the story to the anaesthetist as a line was inserted in my arm. I also mentioned, for effect, that they didn’t ask about my dextrocardia… (I achieved the appropriate response — to this untruth). Subsequent conversations with anaesthetists confirmed that their opinions were in keeping with mine: this “was management” and ‘new innovative ways of working’ (read: killing).

    As a second-year medical student with a strong atopic background (skin, lungs, hay fever etc.), I came out in what I now know to be widespread urticaria with angioedema. On going to the university health centre, the receptionist triaged me to the nurse, because it was ‘only skin’. I didn’t receive a diagnosis, just an admonition that this was likely due to not washing enough (which may have been incidentally true or false…). A more senior medical student provided me with the right diagnosis over lunch.

    The latter example chimed with me, because D. R. Laurence, in his eclectic student textbook of Clinical Pharmacology, lampooned the idea that nurses had ‘innate’ understandings of GI pharmacology, a delusion that remained widespread through my early medical career. Now, sadly, similar prescientific reasoning underpins much UK dermatology. The public are not well served.

  • 08/04/2019

    Post Link

    What universities are about

    James Williams worked at Google in a senior role for ten years, but has moved into philosophy at Oxford (for the money, obviously…). He has written a wonderful short book with the title “Stand Out of Our Light”. The name comes from a humorous account of a meeting between Diogenes and Alexander the Great (no spoilers here).

    His book is a critique of much digital technology that — to use his analogy — does not act as an honest GPS, but instead entices you along paths that make your journey longer, all in the name of capturing your attention, such that you are deflected from your intentions.

    He starts chapter 3 with something comical and at the same time profound.

    When I told my mother I was moving to the other side of the planet to study technology ethics at a school that’s almost three times as old as my country, she asked, “Why would you go somewhere so old to study something so new?” In a way the question contained its own answer.

    For me that is the power of the academic ideal.

  • 05/04/2019

    Post Link

    Late night thoughts #3

    Late night thoughts on medical education #3: Touching the void

    Clayton Christensen gets mixed press: he cannot be accused of not pushing his ideas on ‘disruption’ to — well — disruption. So, his long history of predicting that a large number of universities will be bankrupt in a few years due to ‘innovation’ and ‘digital disruption’ I take with a pinch of salt (except I would add: an awful lot should be bankrupt). But I am glad I have read what he writes, and what he says in the following excerpts from an interview makes sense to me:

    Fortunately, Christensen says that there is one thing that online education will not be able to replace. In his research, he found that most of the successful alumni who gave generous donations to their alma maters did so because a specific professor or coach inspired them.

    Among all of these donors, “Their connection wasn’t their discipline, it wasn’t even the college,” says Christensen. “It was an individual member of the faculty who had changed their lives.”

    “Maybe the most important thing that we add value to our students is the ability to change their lives,” he explained. “It’s not clear that that can be disrupted.”

    Half of US colleges will be bankrupt in 10 to 15 years.

    We know several factors that are dramatically important in promoting learning in university students: the correct sort of feedback, and students who understand what feedback is about (and hence can use it); and close contact. Implicit in the latter is that there is continued contact with full-time staff. When stated like this, it is easy to understand why the student experience and faculty-guided learning are so poor in most UK medical schools. The traditional way of giving timely feedback has collapsed as the ward / bedside model of teaching has almost disappeared; and teaching is horribly fragmented because we have organised teaching around the working lives of full-time clinicians, rather than what students need (or what they pay for). When waiting times are out of control, when ‘bodies’ are queued up on trolleys, and when for many people getting a timely appointment to see an NHS doctor is impossible, it is self-evident that a tweak here and there will achieve very little. Without major change things will get much worse.

    When MIT under Chuck Vest put all of its courseware online, it merely served to illustrate that the benefits of MIT were not just in the materials, but in ‘being there’. And ‘being there’ is made up of other students, staff, and the interactions between these two groups.

    Medical schools were much smaller when I was a medical student (1976-1982). Nevertheless, there was remarkably little personal contact, even then. Lectures were to 130+ students, and occasional seminars were with groups of 10-12. Changing perspective, students did recognise the Dean of Medicine, and could name many of the lecturers who taught them. Integration of the curriculum had not totally disrupted the need for a course of lectures from a single person, and the whole environment for learning was within a physical space that was — appropriately enough — called a medical school: something obvious to the students was that research and teaching took place in the same location. For the first two years, with one possible exception, I was fairly confident that nobody knew my name. If a student passed a lecturer in the street, I doubt if the lecturer would recognise the student, let alone be able to identify them by name.

    Two members of staff got to know me in the first term of my opening clinical year (year 3): Nigel Speight, a ‘first assistant’ (senior registrar / lecturer) in paediatrics; and Sam Shuster, the Professor of Dermatology in Newcastle, with whom I started a research project. For paediatrics, I was one of four junior students attached to two 30-bedded wards, for ten weeks. It was very clear that Nigel Speight was in charge of us, and the four of us were invited around to his house to meet his kids and his wife. It was interesting in all sorts of ways — “home visits”, as we discovered in general practice, often are — but I will not go into detail here.

    Sam invited me around for an early evening dinner and I met his wife (Bobby), and we talked science, and never stopped — except to slag off Margaret Thatcher, and Milton Friedman. Meeting Sam was — using Christensen’s phrase — my ‘change of life’ moment. As I have written elsewhere, being around Sam, was electric: my pulse rate stepped up a few gears, and in one sense my cortical bradycardia was cured.

    There are those who say that meaningful personal contact is impossible in the modern ‘bums on seats’ research university. I do not agree, although it is not going to happen unless we create the necessary structures, and this does not involve bloody spreadsheets and targets. First, even in mega-universities like the Open University, with distance learners, it was shown to be possible. Second, in some collegial systems, close personal contact (and rapid verbal feedback!) is used to leverage a lot of private study from students. In the two years I did research under Sam’s supervision (as an undergraduate — not later when I worked for him as a full time researcher), I doubt that I spent more than six hours one-to-one with him.

    How you leverage staff time to promote engagement and learning is the single most important factor in giving students what they need (and often what they want, once they know what that is). We will continue to fail students until we realise what we have lost.

  • 04/04/2019

    Post Link

    P53: You have no idea

    P53 and Me | NEJM

    A long, long time ago, I published papers on p53 and skin (demonstrating p53 upregulation in a UVR wavelength-specific way). But germline mutations are something else. The account below is from a US medical student with Li-Fraumeni syndrome (germline p53 mutations).

    The changes to my outlook, my psyche, have been much more profound. It’s impossible to describe the unique panic that comes with imagining that any of your cells could decide to rebel at any moment — to propagate, proliferate, “deranged and ambitious,” as my anatomy professor remarked of cancer. It sounds like a paranoid medical student’s fugue-state nightmare. Any cancer at any time: a recurrence, a new primary, a treatment-related malignancy. Some are more likely than others: brain, colon, leukemia, sarcomas. But the improvisation of my cells and their environment is the only limit. And then there are more practical questions: Should I wear sunscreen every day, or is it better just to stay inside?

    I recently saw a college friend I hadn’t seen in 10 years and told her about my mutation. Nonmedical people react badly to such news. Medical people probably would, too, but we have rehearsed emotional distance, so our reactions often stay internal, to be unearthed later. “You must be very careful about what you…eat? Drink? What you…put into your body?” she said.

    “No,” I said. “There’s no point to that.”

    “Oh,” she said, saddened. “This must have changed you. It must really affect the way that you see…the world?”

    I nodded, thinking, You have no idea.

    Indeed.

  • 03/04/2019

    Post Link

    Science and nonscience

    I like statistics and spent most of my intercalated degree ‘using’ medical stats (essentially, writing programs on an IBM 360 mainframe to handle a large dataset that I could then interrogate using the GLIM package from NAG). Yes, the days of batch processing and punchcards. I found — and still find — statistics remarkably hard.

    I am always very wary of people who say they understand statistics. Let me rephrase that. I am very suspicious of non-professional statisticians who claim that they find statistics intuitive. I remember that it was said that even the great Paul Erdos got the Monty Hall problem wrong.
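
    As an aside, the counterintuitive answer is easy to check numerically. Here is a minimal simulation sketch in Python — my illustration, nothing from the original discussion:

      import random

      def monty_hall(switch: bool, trials: int = 100_000) -> float:
          """Estimate the probability of winning the car when sticking or switching."""
          wins = 0
          for _ in range(trials):
              car = random.randrange(3)   # door hiding the car
              pick = random.randrange(3)  # contestant's first choice
              # the host opens a door that is neither the contestant's pick nor the car
              opened = next(d for d in range(3) if d != pick and d != car)
              if switch:
                  pick = next(d for d in range(3) if d != pick and d != opened)
              wins += (pick == car)
          return wins / trials

      print(f"stick:  {monty_hall(switch=False):.3f}")  # about 0.333
      print(f"switch: {monty_hall(switch=True):.3f}")   # about 0.667

    Sticking wins about a third of the time and switching about two thirds — the answer that trips up even very good intuitions.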

    The following is from a recent article in Nature:

    What will retiring statistical significance look like? We hope that methods sections and data tabulation will be more detailed and nuanced. Authors will emphasize their estimates and the uncertainty in them — for example, by explicitly discussing the lower and upper limits of their intervals. They will not rely on significance tests. When P values are reported, they will be given with sensible precision (for example, P = 0.021 or P = 0.13) — without adornments such as stars or letters to denote statistical significance and not as binary inequalities (P < 0.05 or P > 0.05). Decisions to interpret or to publish results will not be based on statistical thresholds. People will spend less time with statistical software, and more time thinking.

    Scientists rise up against statistical significance
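
    By way of illustration only — this is my sketch, not anything from the Nature piece — reporting in that spirit might look like the following toy two-group comparison in Python (NumPy and SciPy), which prints an estimate, an approximate 95% interval and an unadorned P value:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      a = rng.normal(5.0, 1.0, 40)  # simulated measurements, group A
      b = rng.normal(4.6, 1.0, 40)  # simulated measurements, group B

      diff = a.mean() - b.mean()
      se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
      lower, upper = diff - 1.96 * se, diff + 1.96 * se  # approximate 95% interval
      t, p = stats.ttest_ind(a, b, equal_var=False)      # Welch's t-test

      # report the estimate, its uncertainty and the P value with sensible precision;
      # no stars, no 'significant' / 'non-significant' dichotomy
      print(f"difference = {diff:.2f} (95% CI {lower:.2f} to {upper:.2f}), P = {p:.3f}")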

    There is lots of blame to go around here. Bad teaching and bad supervision are easy targets (too easy). I think there are (at least) three more fundamental problems.

    1. Mistaking a ‘statistical hypothesis’ for a scientific hypothesis, and falling into the trap of believing that statistical testing can operate as some sort of truth machine. This is the intellectual equivalent of imagining we can create a perpetual motion machine, or thinking of statistics as a branch of magic. The big offenders in medicine are those who like adding up other people’s ‘P’ values — the EBM merchants, keen to sell their NNT futures.
    2. The sociology of modern science and modern scientific careers. The Mertonian norms have been smashed. It is one of the counterintuitive aspects of science that, whatever its precise domain of interest — from astronomy to botany — its success lies less with a set of formal rules than with a set of institutional and social norms. Our hubris is that, while we cling to the fact that our faith in science relies on the ‘external test in reality’, we have ignored how easy it is for the scientific enterprise to be subverted.
    3. This is really a component of the previous point (2). Although communication of results to others — with the goal of allowing them to build on your work — is key, the insolence of modern science policy has turned the ‘endgame’ of science into this communication measured as some ‘unit’ based on impact factor or ‘glossy’ journal brand. But there is more to it than this. The complexity of modern science often means that those who produce the results of an experiment or observation are not in a position to build upon them. The publication is the end-unit of activity. So, some bench assay or result in animals might lead others to try and extend the work into the clinic. Or one trial might be repeated by others with little hard thought about what exactly any difference means. Contrast this with the foundational work performed by Brenner, Crick and others. Experiments were designed to test competing hypotheses, and were often short in duration — one or maybe two iterations might be performed in a day. Inaccuracy or mistakes were felt by the same investigator, with the goal being the creation of a large infrastructure of robust knowledge. Avoiding mistakes and being certain of your conclusions would allow you not to (subsequently) waste your own time. If you and your family are going to live in a house, you are careful where you lay the foundations. If you plan to build something and then sell to make a fast buck, the incentives lie in a different place. Economists may be wrong about a lot of things — and should be silent on much more — but they are right about two important things: institutions and incentives matter. Period.

    Science has been thought of as a form of ‘reliable knowledge’. This form of words always sounded almost too modest to me, especially when you think how powerful science has been shown to be. But in medicine we are increasingly aware that much modern science is not a basis for honest action at all. Blake’s words were to the effect that ‘every honest man is a prophet’. I once miswrote this in an article as ‘every honest man is for profit’. Many an error…

  • 02/04/2019

    Post Link

    Turn-it-around

    A couple of articles from the two different domains of my professional life made me riff on some old memes. The first was an article in (I think) the Times Higher about the plagiarism-detection software Turnitin. I do not have any first-hand experience with Turnitin (‘turn-it-in’), as most of our exams use either clinical assessments or MCQs. My understanding is that submitted summative work is uploaded to Turnitin and the text compared with the corpus of text already collected. If strong similarities are present, the work might be fraudulent. A numerical score is provided, but some interpretation is necessary, because in many domains there will be a lot of ‘stock phrases’ that are part of domain expertise, rather than evidence of cheating. How was the ‘corpus’ of text collected? Well, of course, from earlier student texts that had been uploaded.
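
    For what it is worth, the core mechanism — scoring a submission against a corpus of earlier texts — can be caricatured in a few lines. This is a toy Jaccard similarity over word n-grams in Python, my sketch and emphatically not Turnitin’s actual algorithm:

      def shingles(text: str, n: int = 5) -> set:
          """Overlapping word n-grams ('shingles') for a document."""
          words = text.lower().split()
          return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

      def similarity(submission: str, prior: str, n: int = 5) -> float:
          """Jaccard similarity between two documents' shingle sets (0 to 1)."""
          a, b = shingles(submission, n), shingles(prior, n)
          return len(a & b) / len(a | b) if (a | b) else 0.0

      def flag(submission: str, corpus: list[str], threshold: float = 0.4) -> bool:
          """Flag a submission that is too close to anything already in the corpus."""
          return any(similarity(submission, doc) >= threshold for doc in corpus)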

    Universities need to pay for this service, because in the age of massification, lecturers do not recognise the writing style of the students they teach. (BTW, as Graham Gibbs has pointed out, the move from formal supervised exams to course work has been a key driver of grade inflation in UK universities).

    I do not know who owns the rights to the texts students submit, nor whether they are able to assert any property rights. There may be other companies out there apart from Turnitin, but you can easily see that the more data they collect, the more powerful their software becomes. If the substrate is free, then the costs relate to how powerful their algorithms are. It is easy to imagine how this becomes a monopoly. However, if copies of all the submitted texts are kept by universities, then collectively it would be easier for a challenger to enter the field. But network effects will still operate.

    The other example comes from medicine rather than education. The FT ran a story about the use of ‘machine learning’ to diagnose disease from retinal scans. Many groups are working on this, but this report was about Moorfields in London. I think I read that, as the work was being commercialised, the hospital would have access to the commercial software free of charge. There are several issues here.

    Although I have no expert knowledge in this particular domain, I know a little about skin cancer diagnosis using automated methods. First, the clinical material and the annotation of that material are absolutely rate-limiting. Second, once the system is commercialised, the more subsequent images can be uploaded, the better you would imagine the system will become. This of course requires further image annotation, but if we are interested in improving diagnosis, we should keep enlarging the database if the costs of annotation are acceptable. As in the Turnitin example, the danger is that the monopoly provider becomes ever more powerful. Again, if image use remains non-exclusive, there are lower barriers to entry.

  • 28/03/2019

    Post Link

    All in the stars

    The story is about the ‘approval’ by the Norwegian higher education regulator of courses in astrology. The justification is interesting, relying on the fact that “astrologers had good employment prospects”. So that is alright then. To be fair, the regulators argue that they can only enforce the ‘law’ as it is. You can find similar goings-on close to home for many of us in the UK. (Times Higher Education, 28th March 2019.)