This is not a case study written after the fact by someone who had a plan and executed it cleanly. Decisions were made without full information. Problems were found after they'd already been running for weeks. A few things had to be rebuilt from scratch mid-way through.
The numbers are real. The problems are real. And some of them had nothing to do with advertising — they showed up in what happened after a lead came in, which turned out to matter just as much as how the lead got there.
This is the full story, from the first day.
DAY ONE. EVERYTHING IS MISSING.
The concept was sharp: connect patients in underserved markets — primarily Nigeria and wider Africa — with specialist healthcare in India. Haematology. Bone marrow transplants. Oncology. Complex surgeries where the gap between what's available locally and what's needed is significant. Everything in between, from the initial consultation to travel and appointments, handled end to end.
Good model. Real need. Experienced founders. And then I logged into the systems for the first time.
There was no website. No Google Analytics. No Search Console. No Google Ads account. No tracking of any kind. The company had been operating on referrals and networks, with no digital presence to speak of.
I want to be precise about what "no digital presence" actually means in practice, because it's easy to read that and think I'm exaggerating. There was a domain. There was a placeholder — a holding page — but nothing that could receive traffic, convert a visitor into a lead, or tell me anything about who was finding us, how they were finding us, or what they did when they got there. No pixels, no tags, no goals, no events. If someone in Lagos had typed our company name into Google that week and landed somewhere, we would have had no record of it happening.
For a company targeting patients in Nigeria and Africa who are desperately researching treatment options — often at two in the morning, on a phone, terrified about a diagnosis they received that week — that was a problem that needed to be solved before anything else.
I spent the first week doing nothing but mapping what needed to exist before I could build anything useful. Website. Analytics. Search Console. Tag Manager. Conversion tracking. Account structure. Keyword research. Landing pages. Campaigns last.
GETTING THE WEBSITE RIGHT. BEFORE THE ADS, BEFORE EVERYTHING.
The instinct in this situation is to get the ads running first and sort the website out later. I understood the pressure — the founders wanted leads, and they wanted them soon. But sending paid traffic to a placeholder page burns budget and proves nothing. I made the case for sequencing it correctly, and the first month was infrastructure.
The company had an external IT team who handled the technical build. My role was to define everything that would affect ad performance and conversion: the site architecture, the brand system, the page structure, the copy, and critically — the performance standards the site had to meet before we pointed paid traffic at it. Mobile page load speed is a direct input into Google's Quality Score via the "Landing Page Experience" component. A slow site means higher CPCs in every single auction, indefinitely. So those performance requirements weren't aesthetic preferences — they were ad account constraints I set upfront and worked with the IT team to hit.
The brand direction I defined was built around the nature of the service. Patients arriving at this site have usually already been through something frightening. Deep evergreen tones, restrained layout, clean typography — nothing that felt clinical or transactional. The content on every page was structured around the questions a patient or family member actually types at two in the morning, not what a company brochure would say about itself.
Eight core pages: home, services, about, the doctor consultation pathway, condition-specific pages for haematology and oncology, a destinations page for the India hospital network, and a contact page built around a single focused lead form. Site went live in late July — functional, reasonably fast on mobile, and with tracking in place from day one.
Separately, I also rebuilt the site independently in Next.js — a complete multi-page rebuild with improved architecture and performance. That version is currently under internal review, and it's the one I'd point to as the cleaner example of what the site should eventually become.
SETTING UP THE MEASUREMENT STACK. THE UNGLAMOROUS PART NOBODY TALKS ABOUT.
Nobody writes case studies about setting up Google Search Console. Nobody posts about linking GA4 to Tag Manager on a Tuesday afternoon. It doesn't have a dramatic number attached to it. But I want to write about it here, because if you skip this step — or rush it — you spend months making decisions based on data you can't actually trust. I've seen what that looks like. I wasn't going to let it happen here.
GOOGLE SEARCH CONSOLE
First thing I verified once the site was live. Submitted the sitemap. Confirmed domain ownership via DNS record. Began monitoring for crawl errors and index coverage from day one. This isn't directly about ads — it's about making sure Google can see what we've built, and catching technical issues before they compound quietly over weeks. Within the first fortnight I could see which pages were indexing and at what speed. No surprises later.
GOOGLE ANALYTICS 4
Set up a fresh GA4 property connected to the site through Google Tag Manager. Configured custom events for the moments in the user journey that actually mattered: scroll depth on service pages, time on the doctor consultation page, and — most importantly — form submission completions. If we couldn't reliably record when a form was submitted successfully, nothing the ads account told us about conversions could be trusted.
GOOGLE TAG MANAGER
Everything routes through GTM. Every pixel, every event trigger, every conversion tag lives in one place. The reason is operational: when anything needs to change — a new conversion action, a Meta Ads pixel, a heat mapping script — it happens in GTM without touching the site's codebase. In a setup where I was the only person managing both the website and the ads, this kind of system discipline was not optional. It was the difference between being able to move fast and spending days debugging tracking issues.
GOOGLE ADS CONVERSION TRACKING
One conversion action. Confirmation page view — the page that only loads after a form is submitted successfully. Not a button click, which fires even when form validation fails. Not a page visit to the contact page, which tells you nothing about intent. The confirmation page only appears when a genuine lead has been generated. That became the single signal the ads account would optimise for. Clean, unambiguous, accurate.
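The rule that single conversion action encodes can be sketched in a few lines. This is a logic sketch, not the real implementation: the actual trigger lives in GTM's interface, and the `/thank-you` path here is a hypothetical stand-in, since the real confirmation URL isn't named above.

```python
# Sketch of the conversion rule. A conversion is a page view of the
# confirmation page, which only loads after a successful submission.
# CONFIRMATION_PATH is a hypothetical placeholder.

CONFIRMATION_PATH = "/thank-you"

def is_conversion(event: dict) -> bool:
    """Button clicks deliberately do NOT count (they fire even when
    validation fails), and a contact-page visit says nothing about
    intent. Only the confirmation page view qualifies."""
    return (
        event.get("type") == "page_view"
        and event.get("path") == CONFIRMATION_PATH
    )

events = [
    {"type": "click", "path": "/contact"},        # click on a failed submit
    {"type": "page_view", "path": "/contact"},    # visit, no intent signal
    {"type": "page_view", "path": "/thank-you"},  # genuine lead
]
print(sum(is_conversion(e) for e in events))  # 1
```

One unambiguous predicate is the point: everything the algorithm later optimises for reduces to this check.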
Before any campaign spent a single dirham, we had a definitive answer to the question: what is a conversion, and how do we know when one happens? Most accounts I've reviewed don't have this. They have four or five conversion actions, two of which are duplicates, one of which fires on the wrong trigger, and the algorithm is optimising for something that doesn't correspond to actual lead generation. We didn't have that problem because we built the measurement layer before the campaign layer.
THE RESEARCH PHASE. UNDERSTANDING WHAT WE WERE ACTUALLY SELLING.
Campaigns went live on August 15th. But before I touched the ads interface, I spent three weeks in research that most people skip. I want to walk through what that looked like, because it shaped every structural decision I made later — including some decisions I had to revisit when the data came back and told me I'd been wrong.
UNDERSTANDING THE PATIENT
The first thing I needed to internalise was that we were not selling a product. We were trying to be findable at one of the worst moments of someone's life. A family in Nigeria whose father has just been told he has a blood disorder the local hospital can't adequately treat. Someone in Oman whose child needs a bone marrow transplant and the pathway to getting it is unclear. These people don't search the way someone shopping for a phone searches. They search with urgency, with very specific terminology they've picked up from doctors and medical reports, and they search at strange hours when the fear is loudest.
That changes everything about how you write ad copy, how you structure landing pages, and which search terms you decide to be present for. You are not creating demand. You are being findable at the moment demand already exists and is acute.
KEYWORD ARCHITECTURE
I mapped the keyword landscape in three layers. High-intent procedural terms: people who know exactly what they need and are looking for where to get it. "Bone marrow transplant India cost." "BMT specialist hospital India." "Haematology treatment abroad." Low volume, very high intent. These are the terms where presence matters most.
Condition-awareness terms: people with a diagnosis who are still exploring the treatment pathway. "Aplastic anaemia treatment options." "Thalassaemia specialist abroad." "Best country for blood cancer treatment." A step further up the funnel, but still serious intent.
Geographic-intent terms: patients in our target markets looking for routes to care. "Medical treatment in India from Nigeria." "Affordable cancer treatment abroad from Africa." "Travel for surgery India." These were critical for reaching the specific audiences the company was built to serve.
I also built the negative keyword list before launch — not after. Medical queries attract enormous irrelevant traffic. People searching for symptoms. People researching without any intent to seek treatment abroad. People looking for jobs in Indian hospitals. If I didn't exclude these proactively, the first weeks of spend would disappear into clicks that were never going to become leads.
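The pre-launch filter amounts to a substring screen over incoming search terms. A minimal sketch, with illustrative negative phrases drawn from the categories above (symptom research, job seekers); the real list was far longer:

```python
# Hypothetical negative phrases for the categories described:
# symptom-only searches, job-related queries, general research.
NEGATIVE_TERMS = {"symptoms", "jobs", "vacancy", "salary", "causes of"}

def passes_negatives(query: str) -> bool:
    """True if the search term contains no negative phrase."""
    q = query.lower()
    return not any(neg in q for neg in NEGATIVE_TERMS)

queries = [
    "bone marrow transplant india cost",  # high intent: keep
    "leukaemia symptoms in adults",       # symptom research: block
    "nurse jobs indian hospitals",        # job seeker: block
]
kept = [q for q in queries if passes_negatives(q)]
print(kept)  # ['bone marrow transplant india cost']
```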
THE FIRST CAMPAIGNS GO LIVE.
August 15th. Six campaigns structured around intent tiers — each with its own budget, its own tightly themed ad groups, its own corresponding landing page. Responsive Search Ads across every group, fifteen headlines and four descriptions each, written to the specific search intent of each keyword cluster rather than generic copy.
Extensions built out fully from day one: callouts for the free consultation and end-to-end coordination, sitelinks to specific condition pages rather than the homepage, structured snippets listing actual procedures. The goal was to make the ad useful before someone even clicked — so the clicks we paid for came from people who already understood what we offered.
The first few weeks were deliberately treated as a data collection phase. Before smart bidding can work efficiently, the account needs a minimum volume of conversion signal. The plan was to run manual CPC initially and let the campaigns accumulate real search term data — not to hit targets yet, but to have something honest to audit. After about six weeks, I ran that audit. It surfaced several problems worth taking seriously.
THE SIX-WEEK AUDIT. WHAT THE DATA ACTUALLY SHOWED.
The impression share report raised the first flag. We were losing a significant chunk of available impressions — and the reason wasn't budget. Google was choosing not to show our ads as often as it could because the ads-to-landing-page experience wasn't scoring well enough in its auction logic. That pulled me into a full account audit: Quality Score breakdown by keyword, Page Speed Insights on every landing page, search term analysis, campaign structure review.
A Quality Score average of 5.74 means you're losing auctions to competitors who have the same bid as you but cleaner accounts. You're paying more per click than you should. You're showing less often than you could. And it's a compounding problem — lower Quality Score means worse Ad Rank, which means lower visibility, which means fewer clicks, which means slower conversion data for the algorithm, which means less efficient optimisation. Everything gets worse together, slowly and invisibly.
Running Page Speed Insights on the landing pages confirmed the source. Mobile LCP — the time it takes for the main visible content to appear on a mobile device — was at 5.4 seconds. Google's threshold for "Good" is 2.5 seconds. More than double, on mobile, in markets where most of the audience browses on a phone on a 4G connection.
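Google publishes fixed LCP bands, so the classification is mechanical. A small sketch of those thresholds ("Good" up to 2.5 s, "Needs Improvement" up to 4.0 s, "Poor" beyond), applied to where the pages started and where they ended up:

```python
# Google's published LCP bands for Core Web Vitals.

def rate_lcp(seconds: float) -> str:
    if seconds <= 2.5:
        return "Good"
    if seconds <= 4.0:
        return "Needs Improvement"
    return "Poor"

print(rate_lcp(5.4))  # Poor  (where the landing pages started)
print(rate_lcp(1.9))  # Good  (where they ended up after the rebuild)
```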
Every campaign was throttled. Google wasn't entering the best auctions available — it was entering the cheapest ones. We were present in the market but not competitive in it.
High-intent terms like "bone marrow transplant India" and several condition-specific queries were appearing in three separate campaigns simultaneously. Our own campaigns were bidding against each other in the same auctions, inflating our own CPCs with no benefit to anyone.
One of the six campaigns had been live since August 15th. It had spent real budget. It had produced 1.2% of total conversions. It had never been reviewed or questioned. It was drawing resources away from every other campaign while contributing almost nothing to the actual lead pipeline.
Going through the actual queries that had triggered our ads, a significant proportion were people searching for symptoms, for specific local hospital names, for general health information with no treatment-seeking intent. The negative keyword list I'd built at launch wasn't deep enough.
THE REBUILD. FIXING THINGS IN THE RIGHT ORDER.
When you have multiple compounding problems in an ads account, the instinct is to start with the most visible one — the bids, the budget allocation — because those feel like immediate levers. That would have been the wrong move. The problems were chained. The landing page was suppressing Quality Score. Quality Score was suppressing Ad Rank. Ad Rank was suppressing impression share. Until the landing page was fixed, nothing else would hold. You start at the root, not the branch.
LANDING PAGE PERFORMANCE OVERHAUL
Went back into the codebase. Converted all hero and above-the-fold images to WebP format with explicit width and height attributes so the browser reserves space before the images load. Moved every non-essential third-party script to load after the main content rendered. Implemented lazy loading for below-the-fold imagery. The result: mobile LCP dropped from 5.4 seconds to 1.9 seconds. We moved from Google's "Poor" band into "Good." Within two weeks, Quality Scores across the account started improving.
CAMPAIGN CONSOLIDATION AND KEYWORD DEDUPLICATION
Built a full keyword matrix mapping every keyword in the account against every campaign and ad group it appeared in. Identified all cannibalization instances. Established clear intent-based segmentation with explicit cross-campaign negative keywords to enforce the boundaries. Paused the near-zero contributing campaign entirely and migrated its single performing ad group into the primary campaign. Went from six campaigns to five, each with a defined purpose and no internal competition.
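The matrix check itself is simple once the data is laid out: invert the campaign-to-keyword mapping and flag anything that appears more than once. A sketch with illustrative campaign names and keywords (the real account was larger):

```python
from collections import defaultdict

# Illustrative account structure; names and keywords are examples,
# not the actual account contents.
account = {
    "High-Intent Procedures": [
        "bone marrow transplant india", "bmt specialist hospital india"],
    "Condition Awareness": [
        "bone marrow transplant india", "aplastic anaemia treatment"],
    "Geo Intent": [
        "bone marrow transplant india", "medical treatment india from nigeria"],
}

def find_cannibalization(acct: dict) -> dict:
    """Return keywords that appear in two or more campaigns,
    i.e. places where our own campaigns bid against each other."""
    seen = defaultdict(list)
    for campaign, keywords in acct.items():
        for kw in keywords:
            seen[kw].append(campaign)
    return {kw: camps for kw, camps in seen.items() if len(camps) > 1}

print(find_cannibalization(account))
# flags 'bone marrow transplant india' in all three campaigns
```

Each flagged keyword then gets one home, enforced with cross-campaign negatives.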
AD COPY AND EXTENSION REBUILD
Rewrote all ad copy to achieve tight keyword-to-ad-to-landing-page alignment — the core driver of the "Ad Relevance" and "Expected CTR" Quality Score components. Rebuilt all extensions: new callouts for the free consultation and end-to-end coordination, sitelinks pointing to specific condition pages, structured snippets listing actual procedures. Average Quality Score moved from 5.74 to 7.9 over six weeks.
SMART BIDDING MIGRATION WITH CONTROLLED TCPA
Once structural fixes were in place and conversion data had accumulated to sufficient volume — a minimum of 30 conversions per campaign in a 30-day window — migrated from manual CPC to Target CPA smart bidding. Set the initial tCPA slightly above the then-current blended CPA to give the algorithm room to learn without triggering under-delivery. Stepped it down gradually over four weeks as the algorithm stabilised.
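Under the assumptions stated here (30 conversions in a 30-day window as the eligibility bar, an even weekly step-down), the migration logic can be sketched as follows. The $40 starting tCPA is illustrative, not the account's actual figure:

```python
# Eligibility gate and step-down schedule for the tCPA migration.
# The 30-conversion minimum matches the text; the dollar figures
# in the example call are hypothetical.

def tcpa_ready(conversions_30d: int, minimum: int = 30) -> bool:
    return conversions_30d >= minimum

def step_down_schedule(initial: float, target: float, weeks: int = 4) -> list:
    """Evenly step the tCPA from an initial value (set above the
    blended CPA to avoid under-delivery) down to the target."""
    step = (initial - target) / weeks
    return [round(initial - step * i, 2) for i in range(1, weeks + 1)]

print(tcpa_ready(34))                              # True
print(step_down_schedule(initial=40.0, target=29.0))
# [37.25, 34.5, 31.75, 29.0]
```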
NEGATIVE KEYWORD EXPANSION
Committed to weekly search terms review for the first two months post-rebuild. Added over 180 negative keywords across the account — irrelevant navigational queries, competitor branded terms, symptom-only searches with no treatment-seeking intent, and job-related queries that were triggering on our medical keywords. This work is not a one-time event. It is a continuous hygiene practice that has a compounding positive effect on traffic quality over time.
WHEN THE LEADS CAME IN, A DIFFERENT SET OF PROBLEMS APPEARED.
Once the ads started working, it became clear that the account was only one part of the system. The lead comes in, the form gets submitted, the CPA looks clean in the dashboard — and then you find out what actually happens next.
I started sitting with the patient coordinators regularly. Not to report back to management — I wanted to understand the pipeline from their side, because I was optimising based on form submissions and had no visibility into what happened after a lead landed.
PROBLEM ONE: RESPONSE TIME
When a patient in Nigeria submits a form at 11pm their time — which is not uncommon; that is when people research the things they are frightened to research during the day — they were not always receiving a response until the following morning UAE time. That is a gap of several hours for someone in a state of urgency making a high-stakes decision. We established a response time standard: every lead receives an acknowledgement within a defined window, even if the full consultation call can't happen immediately. The missed lead rate dropped after this change alone.
PROBLEM TWO: MESSAGING INSTEAD OF CALLING
The coordinators were primarily reaching out via WhatsApp message rather than calling. I understood why — messaging is easier to manage when you are handling multiple cases simultaneously. But for a patient making a potentially life-changing healthcare decision, a message from an unknown number feels transactional. A call from a person who introduces themselves and asks how they can help feels entirely different. We shifted the first-contact protocol to calls, with follow-up documentation by message.
PROBLEM THREE: PATIENTS GOING QUIET AFTER FIRST CONTACT
A consistent pattern: patients would engage initially, seem genuinely interested, and then go quiet. When the coordinators dug into why, the same explanation came up repeatedly: the patient had been asked to share their medical reports so the in-house doctor could review the case — and they either didn't have the reports organised, didn't know how to send them digitally, or were overwhelmed by the request and withdrew entirely.
We restructured the intake sequence. Instead of asking for documents upfront, the coordinator now leads with the consultation — a general conversation with the doctor, no documents required initially. The patient gets to speak with someone who understands their condition and makes them feel heard. The document request comes after that conversation, once trust is established. The drop-off rate after first contact fell significantly.
None of these were advertising problems. All of them were directly affecting the real return on our ad spend. I would not have known about any of them if I hadn't walked into the room and asked.
BUILDING THE TRACKING LAYER FOR THE SALES PIPELINE.
I could fix the ads. I could fix the landing pages. I could sit with the coordinators and help redesign how they engaged with leads. But none of that learning was being captured anywhere in a form that persisted. Every conversation about lead quality happened verbally and then dissolved. The following week, decisions were being made on memory and gut feel rather than patterns visible across actual data.
PHASE ONE: THE SPREADSHEET
I built a shared tracking sheet that every coordinator updated when a new lead came in. Date of submission. Source campaign. Country of origin. Condition type. Current status. Outcome notes. Simple. Not glamorous. But for the first time we had a record we could look at and ask real questions of.
The patterns that emerged from even four weeks of consistent tracking were immediate and actionable. Leads from Nigeria with haematology-related queries had significantly higher follow-through rates than leads from other sources. Leads generated by broadly-worded, high-funnel ad groups were producing form submissions but very low consultation conversion rates. A form submission from someone who searched "medical tourism" is not the same as a form submission from someone who searched "bone marrow transplant cost India from Nigeria." Both show as conversions in Google Ads. Only the data downstream tells you which one is real intent.
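The question the sheet let us ask is a one-liner once the rows exist: of the submissions from each campaign, how many reached a consultation? A sketch with made-up rows (the real sheet had more fields and far more leads):

```python
from collections import Counter

# Made-up tracking-sheet rows; campaign names are illustrative.
leads = [
    {"campaign": "high_intent", "consultation": True},
    {"campaign": "high_intent", "consultation": True},
    {"campaign": "high_intent", "consultation": False},
    {"campaign": "broad_funnel", "consultation": True},
    {"campaign": "broad_funnel", "consultation": False},
    {"campaign": "broad_funnel", "consultation": False},
    {"campaign": "broad_funnel", "consultation": False},
]

def consult_rate(rows: list) -> dict:
    """Consultation conversion rate per source campaign."""
    total, booked = Counter(), Counter()
    for r in rows:
        total[r["campaign"]] += 1
        booked[r["campaign"]] += r["consultation"]
    return {c: booked[c] / total[c] for c in total}

print(consult_rate(leads))
# high_intent ~0.67 vs broad_funnel 0.25: same "conversion" in
# Google Ads, very different lead downstream
```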
PHASE TWO: ODOO CRM
The spreadsheet worked for its purpose. But as lead volume increased, its limits became obvious. Multiple coordinators updating it simultaneously caused conflicts. There was no way to assign ownership of a lead to a specific coordinator with accountability. No automated follow-up reminders. No pipeline view.
We moved to Odoo CRM. I scoped the pipeline requirements, defined the stage logic, and managed the implementation through a freelance developer. The pipeline was set up to mirror the actual patient journey: New Lead → Contacted → Consultation Booked → Doctor Reviewed → Case Forwarded to Hospital → Quote Received → Travel Arranged → Converted. Each stage has a clear definition. Each lead has an owner and a timestamp. Follow-up tasks are logged in the system, not kept in someone's head or a WhatsApp reminder.
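The stage logic above can be sketched as a small state machine: stages are ordered, every move is timestamped, and each lead has an owner. This is a simplified model of the behaviour configured in Odoo, not its implementation:

```python
from datetime import datetime, timezone

# Stage names mirror the pipeline described in the text.
STAGES = [
    "New Lead", "Contacted", "Consultation Booked", "Doctor Reviewed",
    "Case Forwarded to Hospital", "Quote Received",
    "Travel Arranged", "Converted",
]

class Lead:
    """A lead with an accountable owner and a timestamped history,
    advancing one defined stage at a time."""

    def __init__(self, owner: str):
        self.owner = owner
        self.stage = STAGES[0]
        self.history = [(self.stage, datetime.now(timezone.utc))]

    def advance(self) -> str:
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("Lead already converted")
        self.stage = STAGES[i + 1]
        self.history.append((self.stage, datetime.now(timezone.utc)))
        return self.stage

lead = Lead(owner="coordinator_a")
lead.advance()
print(lead.stage)  # Contacted
```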
The team is actively using it. The data coming out of it is already changing how I read the Google Ads account. I can now see not just how many leads a campaign generated, but where in the pipeline those leads are sitting. That is a completely different level of insight than form submission counts in a dashboard.
$29 per form submission across 569 total conversions sounds clean. But when the spreadsheet started showing us that not all of those submissions converted to consultations at the same rate — and that the rate varied significantly by source campaign, country, and condition type — the real question became: what is our cost per genuinely qualified case? That number is higher than $29. It's also the number that actually drives the business. You can only calculate it when you have a tracking layer that goes past the confirmation page.
WHERE IT STANDS. RESULTS ACROSS ALL THREE PHASES.
Here is where everything stands, before and after.

BEFORE
- No website, no tracking infrastructure
- Quality Score: 5.74 average
- Mobile LCP: 5.4s — rated Poor
- 68% impression share lost to quality
- All 6 campaigns budget-limited
- Keyword cannibalization across 3 campaigns
- CPA: $120+ on early campaigns
- No post-submission lead tracking
- Coordinator process unstructured
- No CRM, no pipeline visibility

AFTER
- Full 8-page site, GA4, GSC, GTM live
- Quality Score: 7.9 average
- Mobile LCP: 1.9s — rated Good
- 11% impression share lost to quality
- 0 of 5 active campaigns budget-limited
- Clean intent-based structure, no duplication
- Blended CPA: $29 · 569 total conversions
- Odoo CRM tracking full patient pipeline
- Intake process rebuilt around patient behaviour
- Real cost-per-qualified-lead now measurable
The $29 figure is cost per form submission, not cost per patient. Total spend across the full campaign period was $16,465, generating 569 submissions — that maths holds. But a form submission is the entry point to the pipeline. The real cost per consultation, and per case forwarded to a hospital, are higher — and we can now calculate both for the first time through the CRM. That work is ongoing.
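The arithmetic behind those two numbers, with one loud caveat: the 40% consultation rate below is a placeholder to show the shape of the calculation, not a figure reported anywhere above.

```python
spend = 16_465     # total spend over the campaign period, USD
submissions = 569  # form-submission conversions

cpa_form = spend / submissions
print(round(cpa_form, 2))  # 28.94, the "$29" headline

# Hypothetical share of submissions reaching a consultation.
consult_rate = 0.40
cpa_consult = spend / (submissions * consult_rate)
print(round(cpa_consult, 2))  # 72.34, always higher than the form CPA
```

Whatever the real consultation rate turns out to be, the structure holds: the qualified-lead CPA is the form CPA divided by that rate, and the CRM is what makes the rate measurable at all.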
What the number doesn't capture is the full scope of what was built: from a patient finding the company through a Google search late at night, to submitting a form on a mobile phone, to a consultation with a doctor, to their travel and hospital appointment being coordinated end to end. That pipeline didn't exist when I joined. It does now.