
15 Questions to Ask a Robot Salesperson Before Your Demo

Robotomated Editorial | Updated April 1, 2026 | 8 min read

Quick Answer: The best questions for robot salespeople focus on real-world deployment data, not features. Ask about average fleet performance (not best-case demos), similar customer references you can call, total cost including integration, and what happens when things go wrong. These questions separate vendors who solve problems from those who sell hardware.

Before the Demo: Set the Right Frame

Robot sales demos are carefully choreographed to show the product at its best. Controlled environments, optimal conditions, and pre-programmed scenarios create an impressive show that may not reflect your operational reality.

Your job is to move the conversation from "look what it can do" to "prove it works in conditions like mine." These 15 questions do exactly that.

Questions About Track Record (1-4)

1. How many robots do you have deployed at customer sites today, and what is the fleet-wide average uptime?

What you learn: Total deployment count indicates maturity. Fleet-wide average uptime reveals reliability. If they share only peak uptime from their best site, ask again for the fleet average.

What good looks like: Over 200 units deployed, 92%+ fleet-wide uptime with data to back it up.

Warning sign: Vague answers like "hundreds of units globally" without specifics, or sharing uptime from a single showcase customer.

2. Can you name three customers in my industry with my facility profile that I can contact directly?

What you learn: Whether the vendor has proven experience with operations like yours. Generic references from different industries and scales are much less valuable.

What good looks like: Immediate offer of relevant references with names and contact details. The best vendors are proud of their reference customers.

Warning sign: Hesitation, NDAs preventing all customer disclosure, or references only from dissimilar operations.

3. What is the longest a customer has continuously operated your robots, and what does their maintenance history look like?

What you learn: Long-running deployments prove durability. Year 1 performance is easy — Year 3+ performance separates reliable technology from fragile technology.

What good looks like: Customers running for 2+ years with documented maintenance data showing consistent or improving performance.

Warning sign: No deployments older than 18 months, or unwillingness to discuss long-term maintenance data.

4. Tell me about a deployment that failed or significantly underperformed. What happened?

What you learn: Self-awareness and honesty. Every vendor has challenging deployments. How they discuss failure reveals their maturity and learning culture.

What good looks like: A candid description of what went wrong, what they learned, and what they changed. This question often produces the most valuable information in the entire conversation.

Warning sign: Claims that no deployment has ever had significant issues.

Questions About Real Performance (5-8)

5. What throughput should I expect during the first 90 days versus steady-state operation?

What you learn: Realistic expectations for the ramp-up period. Performance in Month 1 is always lower than Month 6. Vendors who acknowledge this are setting you up for success.

What good looks like: A specific ramp curve: "Expect 60-70% of target throughput in weeks 1-4, 80-85% in weeks 5-8, and 90-95% by week 12."

Warning sign: Promises of full performance from Day 1, or no ramp-up timeline at all.
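A vendor's quoted ramp curve can be turned into concrete weekly targets to hold them to. Here is a minimal sketch using the lower bounds of the example curve above; the 300 picks-per-hour steady-state target is a hypothetical placeholder, not a figure from any vendor:

```python
# Sketch: convert a vendor's quoted ramp curve into weekly throughput targets.
# The fractions mirror the example ranges in the text (lower bounds used);
# the steady-state target is a hypothetical placeholder.

STEADY_STATE_TARGET = 300  # picks per hour (your contracted target)

def expected_fraction(week: int) -> float:
    """Lower-bound fraction of steady-state throughput expected in a given week."""
    if week <= 4:
        return 0.60
    if week <= 8:
        return 0.80
    return 0.90

for week in (2, 6, 12):
    target = STEADY_STATE_TARGET * expected_fraction(week)
    print(f"Week {week}: expect at least {target:.0f} picks/hour")
```

If measured throughput falls below these floors for more than a week or two, that is the moment to escalate, not after the ramp period ends.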

6. What happens when the robot encounters something it cannot handle?

What you learn: Exception handling is where deployments succeed or fail. Every warehouse has irregular items, fallen product, damaged packaging, and unexpected obstacles.

What good looks like: Clear protocol: robot stops, alerts operator, continues with remaining tasks. Data on exception frequency at existing sites. Remote intervention capability for complex situations.

Warning sign: "Our robots handle everything" — no robot handles everything.

7. What is the actual picks-per-hour your customers achieve, averaged across all sites?

What you learn: The difference between demo performance and field performance. Ask for the average across all deployments, not the best-performing site.

What good looks like: Specific ranges with context: "Our customers average 220-280 picks per hour per zone, with top performers reaching 350."

Warning sign: Only sharing the top number, or pivoting to theoretical capacity instead of actual measured performance.

8. How does performance change during peak season when volumes double or triple?

What you learn: Scalability under stress. Peak season is when you need robots most — and when they face the most challenging conditions (crowded aisles, temporary workers, higher volumes).

What good looks like: Documented peak season performance data from existing customers, with specific examples of how they scaled to meet demand.

Questions About True Cost (9-11)

9. Walk me through every cost I will incur in Year 1 and Year 2, including items not in your proposal.

What you learn: Whether the proposal captures total cost or just hardware. Force the salesperson to enumerate integration, infrastructure, training, software, maintenance, and any other costs not in the headline number.

What good looks like: A detailed line-item breakdown that matches the categories in a proper TCO analysis.

Warning sign: Defensiveness about hidden costs, or "we can discuss that later."
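One way to force completeness is to total the vendor's answers yourself, category by category. A minimal sketch with entirely hypothetical line items and dollar figures (none come from the article or any vendor):

```python
# Sketch: line-item Year 1 / Year 2 total cost of ownership.
# All figures are hypothetical placeholders -- replace with the vendor's answers.

year1 = {
    "hardware": 400_000,
    "integration": 85_000,
    "infrastructure": 40_000,   # charging, networking, floor changes
    "training": 15_000,
    "software_license": 36_000,
    "maintenance": 24_000,
}
year2 = {
    "software_license": 36_000,
    "maintenance": 30_000,      # often rises after the warranty period
    "spare_parts": 12_000,
}

print(f"Year 1 total: ${sum(year1.values()):,}")
print(f"Year 2 total: ${sum(year2.values()):,}")
print(f"Two-year TCO: ${sum(year1.values()) + sum(year2.values()):,}")
```

Any category the salesperson cannot fill in with a number is a category that will surprise you later.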

10. What will I spend on integration with my specific WMS, and how long will it take?

What you learn: Integration is the most common source of budget overruns. Pin down the specific cost and timeline for your WMS platform.

What good looks like: Experience with your specific WMS, a fixed-price integration quote, and a timeline with milestones.

Warning sign: "Integration is straightforward" without specifics about your system, or time-and-materials pricing with no cap.

11. What are the software licensing fees in Year 3, and can they increase?

What you learn: Recurring software costs and whether they escalate. Some vendors use low initial licensing to win the deal, then increase fees significantly in later years.

What good looks like: Published pricing with contractual caps on annual increases (typically 3-5% per year).

Warning sign: Pricing "subject to change" or significant escalation clauses buried in the contract.
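Escalation clauses compound, so even a modest annual increase adds up. A quick sketch of the arithmetic, with a hypothetical base fee:

```python
# Sketch: how an annual escalation clause compounds a software licensing fee.
# The base fee is a hypothetical placeholder.

base_fee = 30_000  # Year 1 annual licensing fee

def fee_in_year(year: int, annual_increase: float) -> float:
    """Licensing fee in a given year under compounding annual increases."""
    return base_fee * (1 + annual_increase) ** (year - 1)

for rate in (0.03, 0.05, 0.10):
    print(f"{rate:.0%} escalation -> Year 3 fee: ${fee_in_year(3, rate):,.0f}")
```

Running the numbers like this before signing makes the difference between a capped 3-5% clause and an open-ended one concrete in dollars.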

Questions About Support and Partnership (12-15)

12. When my robots stop working at 2 AM on a Saturday, what exactly happens?

What you learn: The reality of after-hours support. Get specific: Who answers? What is the response time? Is it remote only, or can someone be on-site?

What good looks like: 24/7 support with defined response SLAs, remote diagnostic capability, and on-site response within 4 hours for critical issues during business hours.

Warning sign: Support only during business hours, or vague commitments without SLAs.

13. How do you measure and share my system's performance with me?

What you learn: Whether you will have visibility into your investment's performance. Data-driven vendors provide dashboards, regular reporting, and proactive optimization recommendations.

What good looks like: Real-time dashboards, weekly automated reports, and quarterly business reviews with optimization recommendations.

Warning sign: No standard reporting, or reports only available upon request.

14. What does the contract exit look like if this does not work out?

What you learn: How confident the vendor is in their solution. Vendors who believe in their product offer reasonable exit terms. Vendors who rely on lock-in often deliver poor outcomes.

What good looks like: Termination for cause with 30-60 day cure period, termination for convenience with 90-180 day notice, and reasonable equipment return provisions.

Warning sign: Multi-year lock-in with no exit provisions, or punitive termination fees.

15. If I buy today, what do I get that I will not get in six months?

What you learn: Whether the urgency to close is genuine or manufactured. This question reveals time-limited incentives, upcoming price increases, or capacity constraints — and also tests the salesperson's honesty.

What good looks like: Honest answer about pricing, capacity, or roadmap considerations — or an admission that there is no time pressure and you should take the time you need.

Warning sign: High-pressure tactics, artificial deadlines, or "this price is only good today."

After the Conversation

Score each vendor's responses and compare them side by side. The vendors who answer every question with data, specificity, and honesty are the ones most likely to deliver on their promises.
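The side-by-side comparison can be as simple as a weighted scorecard over the four question groups. A minimal sketch; the weights, vendor names, and 1-5 scores are hypothetical placeholders you would set for your own priorities:

```python
# Sketch: weighted vendor scorecard across the four question groups.
# Weights and 1-5 scores below are hypothetical placeholders.

weights = {"track_record": 0.3, "performance": 0.3, "cost": 0.2, "support": 0.2}

vendors = {
    "Vendor A": {"track_record": 4, "performance": 3, "cost": 4, "support": 5},
    "Vendor B": {"track_record": 5, "performance": 4, "cost": 2, "support": 3},
}

def weighted_score(scores: dict) -> float:
    """Weighted average of a vendor's group scores on a 1-5 scale."""
    return sum(weights[group] * scores[group] for group in weights)

for name, scores in sorted(vendors.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f} / 5")
```

The point is less the exact math than the discipline: a vendor who scores well only on the demo and poorly on track record, cost, and support will show it immediately.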

Continue your evaluation with the Robot Finder to compare specifications, and model the financial impact with our TCO Calculator.


Robotomated Editorial

The Robotomated editorial team tracks robotics technology across industries — reviews, deployment data, and ROI analysis for operations leaders.
