VCs who want better outcomes should use data to reduce founder team risk
VCs expect the companies they invest in to use data to improve their decision-making. So why aren’t they doing that when evaluating startup teams?
Sure, venture capital is a people business, and the power of gut feeling is real. But using an objective, data-backed process to evaluate teams — the same way we do when evaluating financial KPIs, product, timing and market opportunities — will help us make better investment decisions, avoid costly mistakes and discover opportunities we might have otherwise overlooked.
An objective assessment process will also help investors break free from patterns and back someone other than a white male for a change. Is looking at how we have always done things the best way to build for the future?
Sixty percent of startups fail because of problems with the team. Instinct matters, but a team is too big a risk to leave to intuition. I will use myself as an example. I have founded two companies. I know what it takes to build a company and to achieve a successful exit. I like to think I can sense when someone has that special something and when a team has chemistry. But I am human. I am limited by bias and thought patterns; data is not.
You can (and should) take a scientific approach to evaluating a startup team. A “strong” team isn’t a vague concept — extensive research confirms what it takes to execute a vision. Despite what people expect, soft skills can be measured. VCVolt is a computerized selection model, developed by Eva de Mol, Ph.D., my partner at CapitalT, that analyzes the performance of companies and founding teams.
We use it to inform every investment decision we make and to demystify a common hurdle to entrepreneurial success. (The technology also evaluates the company, market opportunity, timing and other factors, but since most investors aren’t taking a structured, data-backed approach to analyzing teams, let’s focus on that.)
VCVolt allows us to reduce team risk early in the selection and due diligence process, thereby reducing confirmation bias and failure rates, uncovering more winning teams and driving higher returns.
I will keep this story brief for privacy reasons, but you will get the point. While testing the model, we advised another VC firm not to move forward with an investment based on the model’s findings. The firm moved forward anyway because they were in love with the deal, and everything the model predicted transpired. It was a big loss for the investors, and a reminder that a hunch or gut feeling can be wrong — or at least blind you to some serious risk factors.
The platform uses a validated model that is based on more than five years of scientific research, data from more than 1,000 companies and input from world-class experts and scientists. Its predictive validity is noted in top-tier scientific journals and other publications, including Harvard Business Review. By asking the right questions — science-based questions validated by more than 80,000 datapoints — the platform analyzes the likelihood that a team will succeed. It considers: