
September 26, 2016

Two faulty beliefs about IMEs & impartial physicians

Patients and their advocates tend to be skeptical about independent medical opinions.   There are legitimate reasons to be concerned.  However, I want to point out two common but faulty beliefs that create UNNECESSARY distrust in this aspect of disability benefits and workers’ compensation claim management systems.  First, despite patients’ faith in their own doctors, treating physicians as a group are NOT a reliable source of accurate and unbiased information.  Second, although justice IS even-handed, impartial physicians should not find for both sides equally.

Based on my experience leading teams on three consulting projects that audited the quality of more than 1,400 reports of independent medical evaluations and file reviews, I definitely share MANY other concerns about the quality of the reports, the process by which they are procured, and the physicians and other healthcare professionals who provide them.  But these two particular issues are not among them. Read on to find out why.

FACT:  As a group, treating physicians are NOT a reliable source of accurate and unbiased information

First is the incorrect belief that the treating physician is the BEST place to turn for an “independent” opinion because he or she is a highly trained professional who is familiar with the patient’s case.   There are two main reasons why this is incorrect:

(a) There is considerable variability in the appropriateness and effectiveness of the care delivered by practicing physicians, and patients are not in a good position to assess it.  Evaluating appropriateness and effectiveness is admittedly a difficult and imperfect process, but the best way we know to do it is through the eyes of another physician who is equally or more expert in the matter at hand — and has no axe to grind and no financial stake in the outcome:  neither a friendly colleague nor a competitor.

(b) In medical school and residency, physicians are often told they should be “patient advocates” — but that instruction may not include a definition of advocating. (True for me and many others in physician audiences when I have asked about it.)  Patient advocacy sometimes turns into doing or saying exactly what the patient wants, not what is actually in the best interest of the patient’s long-term health and well-being.  (I call this being a McDoctor.)  That is particularly true in today’s world, with fierce competition between medical groups for patients and the use of “patient satisfaction scores” in calculating physician bonuses.  The data are clear:  treating physicians provide unnecessary antibiotics, pain medications, and inappropriate treatments, and are even willing to shade the truth on reports in order to keep their patients happy.

The reason why arm’s-length or “third party” physicians are preferred as the source of opinions is to protect patients from harm from EITHER the “first party” (treating physician) OR the “second party” (the payer — which has an OBVIOUS business interest in controlling cost).  Judges, public policy people, and I get uncomfortable when the WAY the arm’s-length physician is SELECTED is distorted by the interests of either the first party or the second party.

FACT:   Impartial physicians’ opinions should not find for both sides equally

Second is the belief that “truly” impartial physicians should come down on the side of the patient vs. insurer half the time.  Or call it 50:50 for plaintiff vs. defense.  This belief is WRONG because cases selected for review or IME have been pre-selected by claims managers and case managers.   These professionals may not be healthcare professionals, but because they see thousands of cases and become very familiar with the medical landscape, they ARE often more experienced OBSERVERS of the process of care than many physicians. They learn to recognize care that fits normal patterns, and care that is unusual.  These days, they are often expected to use evidence-based guidelines to identify outlier cases.  Those who focus on specific geographical areas come to see which doctors get patients better and which ones don’t.

The VAST MAJORITY of the time, there is no need / no reason to refer a case for independent review.  The treating physician IS doing the right thing;  the diagnoses, prescribed treatment, and causation determination (if work-related) DO appear reasonable and appropriate.   If the claims/case managers see no problems or have no questions, they don’t refer the case for outside review.  If it ain’t busted, why fix it?

So as a rule of thumb, you can assume that some feature or another in ALMOST EVERY case being sent to review has RAISED QUESTIONS in the mind of an experienced observer of the care process.  The reason WHY the case is REFERRED is that the observer has only a very superficial knowledge of medicine.  They need an adviser — an impartial and expert physician who can evaluate the clinical facts and context and then either CONFIRM that the treating physician is doing the right thing or VALIDATE the claims/case manager’s concerns.

When claims/case managers are doing a good job selecting cases for referral, we SHOULD expect that MOST of the decisions will favor the insurer / defense. The more expert the claim/case managers are, the MORE LIKELY the independent physicians will agree — because the claims/case managers are accurately detecting real problems and concerns.

(By the way, a similar ratio seems to apply in the court system.   A judge once told me that MOST defendants ARE guilty – because the prosecutors don’t want to waste their time and public funds bringing cases to trial if they think the defendant is innocent – or if they simply think they will lose.    A perfect example of this pragmatism is the FBI’s recent recommendation not to prosecute Hillary Clinton.  The Director made it clear that they didn’t want to waste the taxpayers’ money on a case in which they wouldn’t be able to convince a jury “beyond a reasonable doubt.”)

Consider this:  If you are a treating physician who FREQUENTLY ends up with your care plans rejected by claims managers and utilization review, consider the possibility that YOU stick out.  Your care patterns may be more unusual than you realize.  Your outcomes may be worse than your colleagues’.

Sadly, some physicians discredit input from independent experts in front of patients.  They THINK they are advocating for their patient — on a social justice crusade — but they end up harming their patient instead, by teaching them that they have been wronged, are a victim of “the system,” and are a helpless pawn.  This message:

  • increases distrust, resentment and anger (which in turn worsens symptoms);
  • encourages passivity rather than problem-solving (which in turn increases the likelihood of job loss, permanent withdrawal from the workforce, and a future of poverty on disability benefits).

A former president of the Oregon Medical Association said he counsels patients this way:  “Your two most important treasures are your health and your job. And I am here to help you protect both of them.”  Healthcare practitioners really ought to do everything they can to help their patients find a successful way out of these predicaments, instead of allowing them to believe they are trapped.  The “system” is not designed to solve their life predicament for them — they may have to do it themselves.  The physicians’ care plans should consist of those treatments known to restore function and work ability most rapidly.  Physicians should encourage their patients to tell their employer they want to find a way to stay productive and keep their jobs.  And if the employer won’t support them, physicians should counsel their patients to try to find a new job quickly — even if it’s temporary or they have to change the kind of work they do.

Adapting to loss is a key part of recovery.   When I was treating patients, I could tell they were going to be OK when they said with pride “I’ve figured out how to work around it, and life is getting back on track.”


September 9, 2016

Pithy 4-min Video & 1-page Manifesto for you to use

Mathematica just released a 4-minute video of me pointing out why the work disability prevention model is important — in plain language.  The video was made at the request of the US Department of Labor’s Office of Disability Employment Policy (ODEP).  The main messages in the video are:

  1. MILLIONS of Americans lose their jobs every year due to injury and illness;
  2. Worklessness and job loss have been shown to harm physical and mental health as well as personal, family, social, and economic well-being;
  3. Worklessness and job loss should therefore be considered poor healthcare outcomes;
  4. Unexpectedly poor outcomes can often be prevented and there is good research evidence about how to do that;
  5. Changes need to be made so that vulnerable people get what they need at the time when they need it — and as a result are able to have the best possible life outcome, stay in the workforce, and keep earning their own living.

In addition, the video explains WHY and HOW some people have unexpectedly poor outcomes of conditions that do not normally cause significant work disruption and job loss.  Unless you’re in my line of work, it is hard to understand why things turn out badly in some cases and not in others — especially if they looked exactly the same at the beginning.

The video is loosely based on a one-page Work Disability Prevention Manifesto I wrote.  I put a draft of it on this blog last spring and got many useful comments.  After many revision cycles, it is now as succinct and compelling as I know how to make it.  ODEP had no hand in the Manifesto; it’s my independent work.

I’m glad I can now share these two items with you because the WORLD needs to know more about these issues — and most PEOPLE in the world have a very short attention span and no interest in the topic to begin with.   I hope you will pass this stuff along to the people whose thinking you want to change or whose buy-in you need. Then maybe THEY will pass it along to others as well.  Social norms ONLY SHIFT when people share powerful mind-opening ideas with one another.

Lastly, let’s all stop speaking ABOUT these problems.  It is time for us all to start speaking FOR action and FOR changes.

WORK DISABILITY PREVENTION MANIFESTO
©Jennifer Christian, MD, MPH August 2016

Preventable job loss demands our attention

  • Millions of American workers lose their jobs each year due to injury, illness or a change in a chronic condition.
  • Preserving people’s ability to function and participate fully in everyday human affairs, including work, is a valuable health care outcome, second only to preserving life, limb, and essential bodily functions.
  • A new medical problem that simultaneously threatens one’s ability to earn a living creates a life crisis that must be addressed rapidly and wisely. Most people are unprepared for this double-headed predicament. It can overwhelm their coping abilities.
  • When medical conditions occur or worsen, especially common ones, most people are able to stay at or return to work without difficulty. However, many prolonged work disability cases covered by private- and public-sector benefits programs began as very common health problems (for example, musculoskeletal pain, depression, and anxiety) but had unexpectedly poor outcomes including job loss.
  • Loss of livelihood due to medical problems is a poor health outcome. Worklessness is harmful to people’s health, as well as to their family, social, and economic well-being.

Why do such poor outcomes occur?

  • Medical conditions by themselves rarely require prolonged work absence, but it can look that way. Both treatment and time off work are sometimes considered benefits to be maximized, rather than tools to be used judiciously.
  • Professionals typically involved in these situations (health care providers, employers, and benefits administrators) do not feel responsible for avoiding job loss.
  • Unexpectedly poor outcomes are frequently due to a mix of medical and nonmedical factors. Diagnosed conditions are inappropriately treated; others (especially psychiatric conditions) are unacknowledged and untreated. The employer, medical office, and insurance company (if there is one) operate in isolation, with little incentive to collaborate.
  • Without the support of a team focused on helping them get their lives back on track, people can get lost in the health care and benefits systems. With every passing day away from work, the odds worsen that they will ever return. After a while, they start to redefine themselves as too sick or disabled to work.
  • When people lose their jobs and do not find new ones, they barely get by on disability benefits and are vulnerable to other detrimental effects.

How can we fix this problem?

  • Good scientific evidence exists about how unexpectedly poor outcomes are created, how to avoid them, and how health care and other services can protect jobs.
  • Health-related work disruption should be viewed as a life emergency. Productive activity should be a part of treatment regimens.
  • When work disruption begins, it can be both effective and cost-beneficial to have a coordinator help the individual, treating physician, and employer communicate and focus everyone’s attention on maximizing recovery, restoring function, accommodating irreversible losses, and making plans for how the individual can keep working, return to work, or quickly find a more appropriate job.
  • We must urgently establish accountability for work disability and job loss in our workforce, health care, and disability benefits systems and build nationwide capacity to consistently deliver services—just in time, when needed—that help people stay at work or return to work.

July 19, 2016

Overcoming fear of sharing our work with others

It’s scary to make a suggestion or share a work sample on a social networking site or a listserv in an effort to help less expert colleagues.  There’s a risk that an even-more-expert colleague will point out the flaws, or even make belittling comments.  If they’re kind, the expert will do it in private.  If not, there is the possibility of gossip behind one’s back, or public humiliation.

A colleague I deeply respect recently took that chance — not because he’s the world’s expert on a particular topic, but because he has a commitment to generously sharing what he does know for the benefit of others.  His goal in sharing his work product was to upgrade the way a particular issue is usually handled across the country.  That’s why I admire my colleague.  He offered a very concrete work product for others to use if they would like.

Fear of humiliation and of being seen as incompetent lies one millimeter beneath my skin. That fear, which is pretty common among humans, runs rampant in physicians.  It was intensified by our severe socialization during medical school and internship.  I hesitate every time I put any of my own thoughts or work “out there” for all to see.

I’m not alone in having this fear of being upstaged by someone more expert. For example, a doctor recently unsubscribed from the ACOEM Work Fitness & Disability Section list-serv with this comment:   “I joined the WFD section because I presumptuously (perhaps arrogantly) thought that given my decades of trying to navigate the rocky coastlines of fitness for duty and disability management I might actually have something of value to offer the newbies who might post questions.  So I responded to couple of posts and …… Well, let me tell you, I may be a big fish expert in my insular little pond, but soon recognized that the WFD Section is replete with knowledgeable, articulate, and fluent experts.  I really didn’t have much of anything new to offer. It was kind of like the experience of being at or near the top of your class in a suburban  high school then getting into a competitive college in the big city where everyone is as smart as you or smarter. So you folks don’t need me; you’ve got it covered. And I’m not fishing for compliments or encouragement either (which you couldn’t offer anyway since you don’t know me), just keeping it real.”

Got any ideas for how to solve this cultural problem?  I don’t — other than to point out these three aphorisms which seem relevant:

  • “Don’t let the excellent drive out the good.”
  • “You may need to lower your standards in order to improve your performance.”
  • “In the land of the blind, the one-eyed man is king.”

Fear of sharing stifles collaboration and innovation — so it inhibits any community’s ability to upgrade its current prevailing level of quality — “what typically happens”.  There’s something wonderful about people contributing what they DO KNOW.  There’s something wrong about being made to feel bad if it turns out someone else is EVEN MORE expert or wise.  So, perhaps we need to ponder, in the “land of the blind”:
— how a kind and respectful person with binocular vision (“the nation’s top expert at seeing”) should behave towards the blind and the (rare) one-eyed people?
— how one-eyed people could best respond to input from the (very rare) binocular individuals?
— how blind people should differentiate between the (rare) one-eyed individuals and the (very rare) binocular people?

In the meanwhile, here is what happened with my colleague.   I received feedback that there were some inadequacies in his work product.  I sent that feedback along (anonymously by request) to him.  I ended my email with this:  “On behalf of all of those who are less well organized and systematic than you are, and for whom your tool provides a concrete model of what ‘good looks like’ — thank you for this contribution.   And, please, if you have the time, use the feedback to go take it up a notch!”

His response: “I’m very open to discussions on ways to improve this document.  I look forward to input of all sorts.”   He also plans to teach a session on how to use the “new & improved version” at our professional development conference next year.   THIS is the kind of professional behavior I DEEPLY ADMIRE.


July 6, 2016

Where does working age end? Who is too old to work?

I’ve been trying to draw more attention to the special healthcare needs of the working-age population, since they power the engine of the economy.  The healthcare industry needs to expand its focus beyond symptoms and select treatments that rapidly restore the ability to function in this group — to help them recover faster and more completely, keep their jobs and livelihoods, and avoid the negative consequences of prolonged worklessness for them and their families!  Doctors and other healthcare professionals often don’t really THINK ENOUGH about the impact of their treatment regimens on working people’s lives outside the office.

But as I advocate, I’ve begun pondering that definition: “working age”.  It seems safe to use 18 as the low end of the range (even though kids younger than that do work, most of them are still in school).  But what about the top end?  At what age should we stop seeing work as the norm?  Stop expecting anyone to work?   Start thinking it’s silly to insist on working?  What term should we use to describe those who have lived for a really long time but are still very active and working?  What term should we use to describe people who are the exact same age but the press of years has made them too feeble to work anymore, even though they are “healthy”?  We all know people in both of these categories.  Simply calling them both old seems inaccurate.

I found a thoughtful article from the World Health Organization (WHO) exploring how to define “old” or “elderly” — in Africa!   Have you noticed how often we notice oddities about our own culture when we look outside it?  That’s when we notice the automatic assumptions and blind spots we’ve been living with.

I think you’ll enjoy reading the excerpts I’ve pasted below from the full WHO article.  I have colored in red the parts I found most eye-opening.  They are a breath of realistic and straight-spoken fresh air about how humans age.

Bottom line as I see it:  In developing societies where the administrative and legal fictions of retirement and pensions do not exist, people tend to define old and elderly straightforwardly and on a case-by-case basis, depending on the actual circumstances of humans as they accumulate years (and as younger generations come up behind them).  Old age begins when one assumes the social role of an elder, when one withdraws from social roles either because it is time for someone younger to take over or because of decline in physical / mental capability.  And finally, when it is no longer possible to actively contribute, one is definitely well into old age.

By that reasoning, if you are still able to play the roles and carry the same load as a person a decade younger, you are not old yet.  I still don’t know what to call you though.  Or, more truthfully, I don’t know what to call myself.  I am still in there pitching, though I turn 70 years old this year.  I did recently give up one of my roles to make room for a younger person who deserved her day in the sun.  Didn’t want to hog it and hold her back.

Proposed Working Definition of an Older Person in Africa for the MDS Project

Most developed world countries have accepted the chronological age of 65 years as a definition of ‘elderly’ or older person, but like many westernized concepts, this does not adapt well to the situation in Africa. While this definition is somewhat arbitrary, it is many times associated with the age at which one can begin to receive pension benefits.

Although there are commonly used definitions of old age, there is no general agreement on the age at which a person becomes old. The common use of a calendar age to mark the threshold of old age assumes equivalence with biological age, yet at the same time, it is generally accepted that these two are not necessarily synonymous.

As far back as 1875, in Britain, the Friendly Societies Act, enacted the definition of old age as, “any age after 50”, yet pension schemes mostly used age 60 or 65 years for eligibility. (Roebuck, 1979). The UN has not adopted a standard criterion, but generally use 60+ years to refer to the older population (personal correspondence, 2001).

The more traditional African definitions of an elder or ‘elderly’ person correlate with the chronological ages of 50 to 65 years, depending on the setting, the region and the country. ….. In addition, chronological or “official” definitions of ageing can differ widely from traditional or community definitions of when a person is older.  Lacking an accepted and acceptable definition, in many instances the age at which a person became eligible for statutory and occupational retirement pensions has become the default definition. ….

Defining old
“The ageing process is of course a biological reality which has its own dynamic, largely beyond human control. However, it is also subject to the constructions by which each society makes sense of old age. In the developed world, chronological time plays a paramount role. The age of 60 or 65, roughly equivalent to retirement ages in most developed countries, is said to be the beginning of old age.

In many parts of the developing world, chronological time has little or no importance in the meaning of old age. Other socially constructed meanings of age are more significant such as the roles assigned to older people; in some cases it is the loss of roles accompanying physical decline which is significant in defining old age. Thus, in contrast to the chronological milestones which mark life stages in the developed world, old age in many developing countries is seen to begin at the point when active contribution is no longer possible.” (Gorman, 2000)

Categories of definitions
When attention was drawn to older populations in many developing countries, the definition of old age many times followed the same path as that in more developed countries, that is, the government sets the definition by stating a retirement age. Considering that a majority of old persons in sub-Saharan Africa live in rural areas and work outside the formal sector, and thus expect no formal retirement or retirement benefits, this imported logic seems quite illogical. Further, when this definition is applied to regions where relative life expectancy is much lower and size of older populations is much smaller, the utility of this definition becomes even more limited.

Study results published in 1980 provides a basis for a definition of old age in developing countries (Glascock, 1980). This international anthropological study was conducted in the late 1970’s and included multiple areas in Africa. Definitions fell into three main categories: 1) chronology; 2) change in social role (i.e. change in work patterns, adult status of children and menopause); and 3) change in capabilities (i.e. invalid status, senility and change in physical characteristics). Results from this cultural analysis of old age suggested that change in social role is the predominant means of defining old age. When the preferred definition was chronological, it was most often accompanied by an additional definition.

…… If one considers the self-definition of old age, that is old people defining old age, as people enter older ages it seems their self-definitions of old age become decreasingly multifaceted and increasingly related to health status (Brubaker, 1975, Johnson, 1976 and Freund, 1997).


June 12, 2016

Almost embarrassed to mention my near miss with “disfigurement”

It’s almost embarrassing to mention my own personal experience with my birthmark, because it is so trivial.  But when I think about why my birthmark has NOT had much impact on my life, I see clearly the impact of the messages that kids get from their parents and the world around them.  I feel lucky and grateful — and more aware of the ways that the lives of others with the same or more substantial “imperfections” and “impairments” may have been changed by the reactions of their parents and the world around them.

There was a defining moment in my life when I had a near miss with defining myself as “disfigured.”  Had that moment gone another way, I might have carried myself differently, dealt with other people differently, and adjusted my view of the future that was possible, given my situation.

I do have a red birthmark on my cheek — roughly 2″ in diameter.  It has grown darker with the passage of time.  It was pale red in my youth.  In my mind, it has never been a big thing — but over the years I have come to realize that everyone doesn’t agree with that assessment.

My life would have been very different if my parents had taught me to see myself as disfigured. Daddy was a pediatrician, and approached every parenting issue from this point of view:  “What will create the best adult out of this kid?”

While I was in elementary school, there must have been a specific day when I asked a question about my birthmark — because I remember what happened next.   My parents pointed out that everyone has “imperfections”.  In fact, we (the four kids) then took turns combing over each other’s bodies until we did find some kind of mark on all of us.  Theirs were QUITE subtle: a pale brown “café au lait” spot on an arm, a patch of unusually thick and hairy skin.  Nothing on the face.   But as far as I was concerned, that proved the truth of my parents’ statement.

They also claimed that during Colonial times, people thought the best way to tell who was a witch was to look for an imperfection — because witches look perfect and real human beings don’t.  Later, with no apparent awareness of any philosophical contradiction, my parents also pointed out that only God is perfect. Overall, their explanations answered my questions, met my needs, reassured me, and settled the issue. I didn’t sense that anything was “wrong” with my birthmark, my face, or my appearance. I decided I was fine in that department.  And in fact, my parents indirectly told me I was pretty by constantly reminding me that  “beauty is only skin deep” and focusing their energy on molding and developing my “insides”.

So I grew up feeling pretty “normal” and reasonably confident — without any belief that I had any appearance problems beyond the ones that most kids have (pimples, hair, etc.).  No thoughts about my birthmark at all, really.  I wasn’t afraid of other people’s reactions to it.   In fact, I was only rarely asked about it, and then I would simply answer “it’s a birthmark.”   I was never ever teased about it or bullied — nor even called names.  A few years ago I realized that my maiden name (Harting) would have made a great rhyme with farting!  As things turned out, I was quite a popular kid, elected Homecoming and Junior-Senior Prom Princess two years in a row, etc. — all the while feeling the usual terrible insecurities and desires of adolescence. Using my mother as my role model, I never wore any makeup at all unless it was a dress-up occasion.  That’s still true now, except I wear cover-up for presentations as well as fancy parties.

It wasn’t until I was in my mid-30s and went to a cosmetics counter to ask for cover-up makeup that the salesperson’s reaction taught me my attitude was unusual.  She said most women in my situation WHISPER when they approach to ask her for cover-up.  A few months later, a woman my mother’s age (with whom I had gotten quite close) asked why I didn’t cover my birthmark.   I acted surprised and said it just didn’t seem necessary.  She then shared her theory: she had decided my coping strategy was to be defiant and “flaunt it.”  I was stunned. I had had no inkling she thought my birthmark WAS a big deal.

She did get me to thinking though.  I realized that if other people notice the birthmark, I can use it.  So now when I arrange to meet people I’ve never seen before, I sometimes remember to say I have a birthmark on my right cheek.  It’s useful as an “identifying mark” as the cops would say!

What a gift my parents gave me!   Imagine how different my perception of myself and the way the world was looking at me — and thus my interactions with the world — could have been. All because of a little 2″ square red mark on the skin that affects no function at all.  I suspect that my outgoing and dominant temperament / personality and social skills have also been a help.  The fact that I never experienced bullying of any kind (until I was in medical school) indicates to me that I am by nature near the top of the human version of the hen house pecking order.

I sat next to a guy on a plane recently with a facial birthmark like mine — a hemangioma.  His was BIG.  It covered half his face and neck.  In some places, dark blue-red skin hung in folds, almost like a turkey’s.  It really was disfiguring — from my vantage point, one side of his face didn’t look like a “regular” face at all.  The guy was SUPER chatty, outgoing, and engaging.  He was almost so “in your face” as to be socially odd.  But after a minute or two, the net result was that he put everyone at ease because our focus shifted to the topic of conversation instead of his appearance.  Turns out he is a specialist in some sort of technological field (forget exactly what) and is on the road constantly.  That means being out in public and meeting new people in new situations is his everyday reality.  I decided that his extreme extroversion is the way he has responded to his predicament — the way he copes with the outside world’s reaction to his appearance.  Rather than shrink away, ashamed, in an attempt to be invisible and thus sidelining himself, he has INSISTED on his right to participate fully in life by INSERTING himself in the middle of it.  I wonder what HIS parents told him as a child…….

I guess you could say that guy on the plane and I have ended up in similar places — with an approach sort of like the puppy in The Present video.  If you don’t SEE yourself as disfigured / disabled, and/or if you don’t even CARE that you are, life is a lot bigger and more fun.

Which makes me wonder:  why do we insist that people define themselves as “disabled” in order to be eligible for “reasonable accommodations” that would let them get or keep a job?  Why can’t we just ask them to explain what they cannot do without the accommodation — instead of insisting that they label themselves?


June 5, 2016

Want to hear my “personal elevator pitch” — and create your own?

I recently developed a brief answer to the question “what do you do?” after watching a 2013 TEDx Talk on “How to Know Your Life Purpose in 5 minutes” by Adam Leipzig.  He called it a personal rather than business version of “an elevator pitch”.  (NOTE: When you’re trying to raise venture capital or make a big sale in business, the elevator pitch is the quick summary you can deliver to a prospective funder or client in the time it takes for the elevator to reach your floor.)

Want to come up with your own personal elevator pitch too?   Get a piece of paper and then watch Adam’s TEDx talk.  His talk isn’t perfect and the process felt kind of forced and dorky — but I went along and did what he asked us to do, including answering out loud. I think I was willing to do so because he started by talking about his 25th college reunion, how unhappy most people were, and how the happy ones differed from the unhappy ones.  And then the actual exercise was surprisingly meaningful and very quick.

Afterwards, Adam pointed out an important feature of the kind of answer he had us design:  it makes most listeners want to ask a follow up question:  “HOW do you do that?”   And then there is an opportunity for a real conversation.

Here’s my answer to the question about what I do (as of spring 2016):

“Because of what I do, people feel inspired to make changes for the better — and because they also feel more willing, prepared, and confident, they actually start doing things in a new way.”

So now, are you curious HOW and WHERE and WHEN I do that?

Doing this exercise was really satisfying.  I keep a copy of my answer handy.  That single sentence has made me feel clearer and calmer inside about the unusual commitments and drive, talents, and unearned gifts with which I was endowed by my maker (thank you God or random chance).  I can feel deep in my bones how much I love serving as a channel through which the new energy that creates better outcomes is released.  Sometimes I think of myself as a “midwife for possibility”.