Automated lead scoring workflow using custom fields and triggers increased conversion rate by 34%

I want to share our implementation of automated lead scoring in Zendesk Sell, built in 2021, that eliminated manual scoring and improved our lead conversion rate by 34%. Before automation, our sales team evaluated leads manually against subjective criteria, which led to inconsistent prioritization and missed opportunities.

We built a scoring workflow using custom fields and automated triggers that evaluates leads across three dimensions: demographic fit (company size, industry, role), engagement level (email opens, website visits, content downloads), and buying signals (pricing page views, demo requests, competitor comparison research). Each dimension contributes points to a total lead score that updates in real-time as new information comes in.

The workflow automatically routes high-scoring leads (75+ points) to senior sales reps, medium-scoring leads (50-74 points) to standard rep queues, and low-scoring leads (<50 points) to nurture campaigns. This has significantly reduced our response time to hot leads and ensures our best reps are working the highest-value opportunities.

We embedded the demographic rules directly in custom fields with dropdown values and associated point values. For example, the Company Size field has the options: 1-50 employees (5 points), 51-200 (10 points), 201-1000 (15 points), and 1000+ (20 points). This keeps everything within Zendesk Sell and avoids external dependencies. The tradeoff is that rule changes require updating field configurations, but we only adjust scoring criteria quarterly, so the maintenance burden is manageable.
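If you ever need to mirror those dropdown point values outside Zendesk Sell (for reporting or validation), a minimal lookup sketch could look like this. The constant and function names are hypothetical, not part of any Zendesk Sell API:

```python
# Point values mirroring the Company_Size dropdown described above.
# In Zendesk Sell itself these live as dropdown option configurations.
COMPANY_SIZE_POINTS = {
    "1-50": 5,
    "51-200": 10,
    "201-1000": 15,
    "1000+": 20,
}

def company_size_points(value: str) -> int:
    # Unknown or missing values score zero rather than raising.
    return COMPANY_SIZE_POINTS.get(value, 0)
```

Keeping the table in one dict makes the quarterly recalibration a one-place change.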

What was your process for determining the point values and thresholds? Did you analyze historical data to identify which factors correlated with closed deals, or start with educated guesses and refine over time? We’re struggling with setting our initial scoring parameters.

The 34% conversion improvement is impressive. How long did it take to see results after implementation? And did you face any resistance from sales reps who were used to choosing their own leads rather than having them assigned based on automated scoring?

Good question. We integrated Zendesk Sell with our marketing automation platform (Mailchimp) using Zapier. Email engagement data flows into custom fields in Zendesk Sell: Email_Opens_30Days, Content_Downloads_30Days, and Demo_Request_Count. Website visit data comes from our analytics platform via API. The scoring triggers in Zendesk Sell read these fields and calculate the total score using a weighted formula:

leadScore = (demographic * 0.3) + (engagement * 0.4) + (buyingSignals * 0.3)

The weights can be adjusted based on what correlates most strongly with closed deals in your business. Each dimension score is normalized to a 0-100 scale before weighting, so the weighted total also lands on the 0-100 scale that the routing thresholds use.
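A minimal sketch of this formula, with each dimension score assumed to be on a 0-100 scale so the total lines up with the 75/50 routing thresholds:

```python
def lead_score(demographic: float, engagement: float, buying_signals: float) -> float:
    """Weighted lead score; each input is assumed normalized to 0-100."""
    return demographic * 0.3 + engagement * 0.4 + buying_signals * 0.3
```

A lead maxing all three dimensions scores 100, so the 75-point and 50-point routing cutoffs apply directly to the result.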

For the demographic scoring component, did you use a lookup table or hardcoded values? We’re implementing something similar and trying to decide whether to maintain scoring rules in a separate database that the workflow queries, or embed the rules directly in the Zendesk Sell workflow logic. The former is more flexible but adds integration complexity.

This is great! Can you share more details about the technical implementation? Specifically, how did you capture engagement data like email opens and website visits? Did you integrate with marketing automation tools, or is this all native Zendesk Sell functionality?

Let me provide a comprehensive overview of our implementation that addresses all these questions and might help others build similar workflows:

Automated Lead Scoring Implementation:

Our lead scoring system was designed to replace subjective manual evaluation with data-driven prioritization. The core architecture consists of three layers: data collection, scoring calculation, and automated routing.

Data Collection Layer: We created eight custom fields in Zendesk Sell to capture scoring inputs:

Demographic Fields:

  • Company_Size (dropdown: 1-50, 51-200, 201-1000, 1000+)
  • Industry (dropdown: SaaS, Financial Services, Healthcare, Manufacturing, Other)
  • Job_Title (dropdown: C-Level, VP, Director, Manager, Individual Contributor)

Engagement Fields:

  • Email_Opens_30Days (number field, auto-populated via Zapier from Mailchimp)
  • Content_Downloads_30Days (number field, auto-populated from marketing automation)
  • Website_Visits_30Days (number field, auto-populated from Google Analytics API)

Buying Signal Fields:

  • Pricing_Page_Views (number field, tracked via UTM parameters)
  • Demo_Request_Count (number field, incremented by form submission trigger)

The engagement and buying signal fields update automatically through integrations. Demographic fields are populated either from enrichment services during lead capture or manually by SDRs during qualification calls.

Scoring Calculation: We use a workflow automation in Zendesk Sell that triggers on field updates. Here’s the scoring logic:

Demographic Score (max 30 points):


Company Size: 1-50=5pts, 51-200=10pts, 201-1000=15pts, 1000+=20pts
Industry: SaaS=10pts, Financial=8pts, Healthcare=6pts, Other=3pts
Job Title: C-Level=10pts, VP=8pts, Director=6pts, Manager=4pts, IC=2pts

Engagement Score (max 40 points):


Email Opens: 1-3=5pts, 4-7=10pts, 8-15=15pts, 16+=20pts
Content Downloads: 1=5pts, 2=10pts, 3+=15pts
Website Visits: 1-2=2pts, 3-5=5pts, 6-10=8pts, 11+=10pts

Buying Signals Score (max 30 points):


Pricing Page Views: 1=10pts, 2=15pts, 3+=20pts
Demo Requests: 1=15pts, 2+=20pts
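All of the banded tables above follow the same "highest band the count reaches" pattern, so a single small helper can cover them. A sketch, with band data copied from the tables and a hypothetical helper name:

```python
def band_points(value: int, bands: list[tuple[int, int]]) -> int:
    """Return points for the highest (minimum, points) band that value reaches."""
    points = 0
    for minimum, pts in bands:  # bands listed in ascending order of minimum
        if value >= minimum:
            points = pts
    return points

# (minimum count, points) pairs taken from the tables above
EMAIL_OPEN_BANDS = [(1, 5), (4, 10), (8, 15), (16, 20)]
CONTENT_DOWNLOAD_BANDS = [(1, 5), (2, 10), (3, 15)]
PRICING_VIEW_BANDS = [(1, 10), (2, 15), (3, 20)]
```

Expressing every table as data rather than nested conditionals makes the quarterly weight reviews a data edit instead of a logic change.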

The total Lead_Score field is calculated automatically whenever any input field changes. We implemented this using a custom function trigger that runs the calculation and updates the score field.

Custom Field Triggers: The automation workflow has three main triggers:

  1. Score Calculation Trigger - Runs whenever any scoring input field changes, recalculates total score
  2. High Priority Assignment Trigger - When Lead_Score >= 75, assigns to senior rep queue and creates high-priority task
  3. Nurture Routing Trigger - When Lead_Score < 50, removes from sales queue and adds to marketing nurture campaign

Here’s the pseudocode for the assignment trigger:


// Trigger: On Lead Score Update
IF Lead_Score >= 75 THEN
  Assign to Senior_Rep_Queue
  Create Task: "High-value lead - contact within 2 hours"
  Send Slack notification to sales channel
ELSE IF Lead_Score >= 50 AND Lead_Score < 75 THEN
  Assign to Standard_Rep_Queue
  Create Task: "Qualified lead - contact within 24 hours"
ELSE IF Lead_Score < 50 THEN
  Remove from sales queues
  Add to Nurture_Campaign
  Set Follow_Up_Date to +30 days
END IF
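For anyone rebuilding this routing outside Zendesk Sell's trigger UI, the pseudocode above translates to a short Python sketch. The queue, task, and campaign names come from the pseudocode; the returned dict shape is an assumption for illustration:

```python
def route_lead(score: int) -> dict:
    """Map a total lead score to the routing actions described above."""
    if score >= 75:
        return {"queue": "Senior_Rep_Queue",
                "task": "High-value lead - contact within 2 hours",
                "notify_slack": True}
    if score >= 50:  # the 50-74 band
        return {"queue": "Standard_Rep_Queue",
                "task": "Qualified lead - contact within 24 hours",
                "notify_slack": False}
    # Below 50: out of the sales queues and into nurture
    return {"queue": None,
            "campaign": "Nurture_Campaign",
            "follow_up_in_days": 30,
            "notify_slack": False}
```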

Conversion Tracking: To measure effectiveness, we track three metrics:

  1. Lead-to-Opportunity conversion rate by score band
  2. Opportunity-to-Close conversion rate by initial lead score
  3. Average time to first contact by score band

Before automation, our overall lead-to-opportunity conversion was 18%. After implementing scored routing, it increased to 24% (34% improvement). More importantly, high-scoring leads (75+) now convert at 42%, while low-scoring leads convert at only 8%. This validates that our scoring model accurately predicts lead quality.
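To make the arithmetic behind the headline number explicit (rates taken from the paragraph above):

```python
baseline, after = 0.18, 0.24  # lead-to-opportunity conversion, before and after
relative_lift = (after - baseline) / baseline
# 0.06 / 0.18 is one third, i.e. roughly the 34% relative improvement quoted
```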

Implementation Timeline and Results: Week 1-2: Historical data analysis to determine scoring factors and weights. We analyzed 18 months of closed deals to identify which demographic and behavioral factors correlated most strongly with wins.

Week 3-4: Custom field creation and integration setup. Built Zapier connections to pull engagement data from Mailchimp and Google Analytics.

Week 5-6: Workflow automation development and testing in sandbox environment. We processed 500 historical leads through the scoring model to validate accuracy.

Week 7-8: Pilot launch with one sales team (5 reps). Monitored results and gathered feedback.

Week 9: Full rollout to entire sales organization (25 reps).

Results became visible within 30 days of full rollout. The 34% conversion improvement was measured over the following quarter compared to the previous quarter’s baseline.

Change Management: Sales rep adoption was our biggest challenge. Many reps were skeptical of algorithmic lead assignment and preferred choosing their own leads. We addressed this through:

  1. Transparency - Showed reps exactly how scores were calculated and let them provide input on weighting factors
  2. Override capability - Allowed reps to manually adjust scores if they had information the system didn’t capture
  3. Performance data - Shared weekly reports showing that reps working high-scored leads had 3x higher close rates
  4. Incentive alignment - Adjusted compensation structure to reward conversion rates, not just lead volume

By month three, 90% of reps were following the scored lead routing without overrides. The remaining 10% were mostly senior reps who had established their own qualification methods - we allowed them flexibility since they were already high performers.

Lessons Learned:

  1. Start simple - We initially tried to score 15 different factors and the model was too complex. Simplifying to 8 key factors improved accuracy.
  2. Regular recalibration - We review scoring weights quarterly based on closed deal analysis. Market conditions change and your scoring model should adapt.
  3. Integration reliability - Our biggest technical issues were with API integrations timing out or sending stale data. Build monitoring and alerting for data flow problems.
  4. Score decay - We added a time decay factor after realizing that engagement from 60+ days ago shouldn’t count as much as recent engagement. Now engagement scores decay 10% per week.
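The decay in point 4 can be implemented a couple of ways; this sketch assumes simple multiplicative decay (retain 90% of the points per elapsed week), which is one reasonable reading of "decay 10% per week":

```python
def decayed_engagement(points: float, weeks_old: float) -> float:
    """Apply 10% multiplicative decay per elapsed week to an engagement score."""
    return points * (0.9 ** weeks_old)
```

After nine weeks a signal retains about 39% of its original weight, so 60-day-old engagement counts far less than fresh activity.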

This implementation has become a core part of our revenue operations and we’re now expanding it to score existing opportunities for upsell/cross-sell prioritization using similar methodology.