Video: First Principles: A Conversation with Parker Conrad & Cursor's Michael Truell | Duration: 2108s | Chapters: Introductions and Greetings (107s), AI for Knowledge Work (186s), BugBot Launch Success (359s), AI-Driven Code Migrations (521s), Growth and Scaling (651s), Future of AI (798s), Rippling for Startups (1525s)
Transcript for "First Principles: A Conversation with Parker Conrad & Cursor's Michael Truell": Alright. Yeah. Thanks for coming. It's great to see you. Great to see you too. And, a little bit surreal. Big fan of your story. Oh, thank you. Yeah. I mean, obviously, I think, like, you're a little bit of a celebrity here. I heard there were, like, many of the engineers at Rippling who recognized you and were, like, oh my gosh, that's the Cursor founder. So we're huge fans of Cursor at Rippling and obviously use it extensively. It's been a huge help to us. Well, honored to hear that, and we are massive fans of Rippling. It is actually, I think, maybe the first piece of software that we bought. And Oh, that's great. We're very happy users of it. That's awesome. We went a really long time without hiring anyone to run payroll. I was actually a payroll admin for a long time. I'm still a payroll admin at Rippling. I've heard. Yeah. Yeah. Okay. Cut. I think that's a rip, bro. Yeah. We got all the b-roll we need here. So, actually, I'm curious. I know that, you know, there are a lot of companies obviously, Cursor's made a huge impact in the coding space, and I see a ton of companies that are saying they wanna be, you know, Cursor for x. Mhmm. And I'm curious if you have a view on what that means to you. And in particular, are there elements, particularly from a user experience perspective on how AI agents interact with other software, that you guys think you really nailed, that are gonna endure as sort of standards? And also, maybe, are there things that you think need to change? Cursor kind of came out of a thought exercise. The way Cursor started was, my cofounders and I, we had worked on programming for a long time. We'd also worked on AI for a while too.
And, there were a couple of moments in 2021 that got us really excited about working on AI, not in academia, but actually working on AI in the real world. And we were very interested in a particular shape of company, which was: you pick an area of knowledge work, and then you build the best pane of glass to do that knowledge work in as AI gets better. You get lots of people to use your thing, and then you get this flywheel going, and you can see where AI is helping people, where AI is not helping people, and, when AI is not helping people, how humans are correcting it. And you can use that to actually feed into the underlying tech, like, make the models better and push the puck forward on the technology, which then, you know, keeps the flywheel going as you reshape the UI. But that formula, like, pick an area of knowledge work, build the AI-forward product for it, and then use the data and distribution you get from that to improve the underlying tech. We were actually interested in that before we settled on programming in particular. And we actually had this false start where we initially passed on programming. We thought that programming was gonna be too competitive of a space to enter. And so we actually worked on building a version of Cursor for mechanical engineers for a bit, training 3D autocomplete models and helping people who deal with SolidWorks and CAD. But that kind of formula for a company, I don't think, is programming specific. And I think that there will be versions of that that exist in all sorts of different areas of knowledge work. I think programming is, you know, one of the first ones to happen because the tech is especially good for programming. And so that form of Cursor for x, I think, will exist: studying how your users interact with the AI and using that to improve the underlying system. Yeah.
Basically, training custom models from that, and then, yeah, architecting the UI around the AI as it gets better. And then the patterns you know, two of the big ways that people use Cursor: one is this, like, autocomplete pattern, and then another one is this agent pattern of you delegating work to a bot. I think both of those work in lots of other fields. And, you know, autocomplete can be especially useful for programmers because it's often very predictable what they're gonna do next. Because if you're in the middle of, like, a three-hour-long refactor, the stuff that you've done in the past is very indicative of what you're gonna do in the future. And then, yeah, this form factor of, like, you are basically talking with this artifact that you're building with the AI, agent style. I think that could be happening for many more areas of knowledge work. And I think that there's a bunch of product overhang there where you just have to kind of nail the details, and some of those details have to do with the robustness of the models and their accuracy, and some of the details have to do with speed too. I'd love to hear a little bit about your BugBot launch and, like, how that's going. Someone on your team told us that we were actually the largest BugBot users by usage. I don't know if that's still true. I think that's no longer true. But it was true at one point. You guys were awesome, early customers. I don't know what that says about us. Maybe we have, like, a lot of bugs. Like, I don't know. It means you're pushing code quickly. Yeah. You know, I was talking about it with Albert. It sounds really cool. It sounds like, basically, you take an interpreted language and catch a lot of things that you would otherwise only see at runtime, much earlier in the process, when it's much less expensive to catch those bugs. The way BugBot works is actually pretty simple.
Every PR that's created in your org, we're gonna spend time in the background trying to verify its correctness and find bugs, using AI. And so it uses a mix of Cursor's codebase understanding, a mix of different models, and, also, increasingly, we're gonna tend toward a world where you can actually, in the background, be running the code and trying to verify it. But it has really taken off. It had, like, deep PMF internally, much more than we expected. And then it has been awesome to see it grow. And some things that surprised us were just how quickly at some companies it's become actually required to have this AI review step before you merge code in. I think it's required here. Okay. Yeah. Awesome. And another thing that surprised us is the false positive rate. So we've done a bunch of work on both spending a bunch of compute on helping you find the gnarliest bugs, but then also not bothering developers. And so across all customers, the FPR, so, like, the set of bugs that are flagged as bugs that someone doesn't actually end up fixing, they kind of just ignore it, maybe because it's a styling error or maybe it's something that's not actually an issue, that's around one half of the things that get reported. So one half of the things are real bugs that end up getting fixed by the programmer. Wow. And so we've done a bunch of work on that and are really excited about that. I'm curious, like, one of the things probably, like, my least favorite parts of software development to be involved in at Rippling are migrations. Mhmm. And, inevitably, there are these things that, like, you start a company, there are decisions that you make that are maybe practical in the early days that as you grow no longer work, and you've gotta kind of, like, rebuild the foundations of the system. Like, what are the specific things that are needed or, I don't know if you can share, what you guys are working on that will sort of unlock this?
Like, what are some of the migrations that you guys are working on? Is it moving tech stacks? Or is it doing some big change to, you know, data schemas? Well, I was thinking mostly, like, big changes to data schemas, like, the sort of, you know, when the architecture of the system needs to change Yeah. The data models need to be done differently. Yeah. I think migrations are really hard. They're also, like, the least fun. Yeah. Yeah. I actually think that there's probably things one could build with, like, the state of the models right now, for migrations, if you tailored it to a specific tech stack. We have, unfortunately, decided to be a really horizontal tool. And so I think that there are a bunch of opportunities to pick, like, a particular vertical, a particular tech stack, specialize there, and build, like, a pointier, even better product, and we just have said no to that. So I think that there Tech stack, you mean, like, Python, like a language? Yeah. Like Python. To use, like, an outdated example, if you were gonna do, like, Python 2 to Python 3, I think that you could do lots of specific hacks for that, plus the current state of the models. You'd probably make a pretty good Python 2 to Python 3 system. I think there's a couple things that are needed. I think the models need more codebase understanding. So they need to be able to ingest Rippling's 30 million lines of code and actually understand them. And not just, like, architecturally get that into the context window, but actually understand all of that code. And with some of the ways in which these models are trained, there's, like, an open question about how we're actually gonna get to a world where you can understand tens of millions of tokens. And then you need to be able to run the code with something like a migration.
It's not gonna be something where it comes out and just works on the first try. And so computer use, I think, is also a bottleneck for some of this stuff, like being able to actually run Rippling and then poke around and QA things, or being able to have, like, the right infrastructure integration to run all of your QA tests, like automated tests, and then react to the output. But so, yeah, I think a bunch of it's in the underlying models. I think that there is a specific product you could build for that right now, but our product philosophy has been to try to stay very horizontal. If you look on Twitter it's dangerous to look on Twitter. But there are a lot of people that sort of say, look, with AI companies, the world has fundamentally changed. Mhmm. You really don't need very many employees. I'm curious, like, how you guys think about that at Cursor. Like, actually, how many employees are you guys today? Oh, we're about 150 people. So That's incredible. Crazy amount of revenue for that size of employee base. Yeah. Where do you think you'll be Feels big to us. So Yeah. Where do you think you'll be a year from now? We have hiring goals, but they have really big asterisks around them where, with the speed that we've grown, we will pull the brakes if it feels like we're not growing properly. Yeah. That's everything from, like, culture breaking down, like, hiring too many sellers for the pipeline you have, like, the bar going down, people not being onboarded properly. And so far, I think we've actually grown pretty well, and it's mostly been bottlenecked around, like, how many great people we can find. But, yeah, I think that we might end the year at, you know, something like 250 people. Got it. And how do you think about the size of teams? I do think you can do a bunch more as one person across a bunch of different functions.
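The Python 2 to Python 3 example from the migration discussion can be made concrete. A minimal sketch of the kind of "specific hack" Michael is gesturing at, expressed as mechanical rewrite rules (real migration tools such as 2to3 operate on syntax trees rather than regexes, so this is only an illustration):

```python
import re

# Two mechanical Python 2 -> 3 rewrites expressed as line-level rules.
# Real tools parse the code into a syntax tree; regexes are only a sketch.
PRINT_STMT = re.compile(r"^(\s*)print\s+(?!\()(.+)$")  # `print x`, not `print(x)`

def migrate_line(line: str) -> str:
    """Apply simple Python 2 -> 3 rewrites to one source line."""
    m = PRINT_STMT.match(line)
    if m:
        # Turn the Python 2 print statement into a function call.
        line = f"{m.group(1)}print({m.group(2)})"
    # `xrange` was renamed to `range` in Python 3.
    return re.sub(r"\bxrange\b", "range", line)

def migrate_source(source: str) -> str:
    return "\n".join(migrate_line(l) for l in source.splitlines())

py2 = "for i in xrange(3):\n    print i"
print(migrate_source(py2))
# for i in range(3):
#     print(i)
```

The point of "plus the current state of the models" is that rules like these cover the mechanical cases, while a model handles the long tail (string/bytes semantics, integer division, library renames) that rules miss.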
We have a very, very small support team relative to the volume that we handle. I think that we're maybe 20 people in support right now Oh, wow. For many millions of users, like, every day. But then, yeah, in engineering, I think that engineers can do a lot more. But it's so easy to underestimate how far professional software engineering is from being solved. The migration point kind of hits home where I think that it's easy to underrate, amongst people who don't program every day, how different the experience of building something from scratch is from being in Rippling's 30 million lines of code and having to change something and having the mass and weight of this existing bundle of logic, like this ball of mud you need to move around. And, you know, as an example, like, the New York Times, which is a customer of ours, spends, you know, maybe roughly $150 million or $200 million on software R&D per year. You don't think of the New York Times as a software company. So, yeah, I think that programming is really far from being solved. People can do much more, but the ROI of adding the next person is actually, like, all the higher because of that, and the scope of what we wanna do as a company is big. And so I don't actually think it will change the hiring trajectory for us right now. At steady state, you know, a few years into the future, I am really curious about if you can build a really consequential technology company that kind of stays capped around, you know, 2,000 people. But we'll see. 2,000 people is still quite a lot. Yeah. But much smaller than the, like, the biggest Yeah. Than, you know, tens of thousands. Yeah. For sure. Yep.
I'm curious, like one of the things I saw in sort of other interviews that you talked about was the sort of ceiling in some of the work that you guys do being very high, that there's actually enormous room to improve very dramatically over the existing way of doing things. And I think you're at $500 million now in ARR. I don't know if you can say that. That's what's reported. That's what's reported. Got it. Yeah. I think we're probably over that number. Yeah. And that was probably, like, two weeks ago, so now it's, like, much higher or something, you know? But, like, do you guys need sales and marketing? Like, how do you think about those types of investments in your business? And, like, is there a point at which they'll be required? Or, like, what's your sort of take on how companies like yours should be thinking about those functions? I think sales and marketing are both important. I think it depends on the market when you need to layer that in and how far you can get without sales and marketing. And we're in a market where end users are incredibly important for the buying decision even within companies. It's very hard to force on a programmer a code editor that they don't wanna use. It's kind of like search engines where, you know, if you think that x tool is better, you can actually use that without disrupting your entire team. I think that probably over time that will start to change, but that dynamic of end users mattering a ton for the buying decision, them having a lot of power within organizations, it being a little bit of a single-player decision means that the highest-order bit in our market is just having the best product. And so we have relentlessly focused on that for a long time. We do now have sales, and we serve large customers. And we're actually really excited about that.
And how large is your sales organization? Roughly 40 people right now. Oh, wow. Yeah. What's, like, the quota annually for a rep that's selling Cursor? I don't know if I'm allowed to say that. I don't know if you're allowed to say that. It's really high. It's really high. It must be, like, $10,000,000 or something? I mean, because, like, normally in SaaS, you think of this as being, like, a million, a little over a million a year per rep. Yeah. It's about an order of magnitude higher. So, I think that this is, like, a consequence of just there are really useful products to build. Yeah. Of course. And the difference between solution one and solution two in markets like ours actually can just be really big. And, also, the ROI of the spend is just you run the numbers. It's crazy. Yeah. Like, it will be interesting to see the first Forrester report on this stuff. But, so people, they wanna buy the best thing. Like, even if you gave away the third-place thing for free, it wouldn't be worth it. How do you think about the trade-offs between distribution and margin? Obviously, I think that's, like, a big thing that a lot of AI companies are thinking about, or it's a big question, I think, for the industry. And so, like, with this high ceiling, my guess is you wanna sort of focus on distribution now. Like, when does that change? Or at what point do you think that needs to change? And just going back to that previous point, I do wanna underscore that I think a lot of this is just, like, the physics of the market we're in, where, like, now there is a really high ceiling. There are really, really useful products to build. And so I don't think it's, you know, a phenomenon purely created by us. There's this book called, I'm Feeling Lucky, and it was written by one of the first product marketers at Google.
It kinda gives you a window into Google's execution in the early days, and it sounds very chaotic and messy. And, I think that in that market too, the physics were: just focusing on having the best product was the highest-order bit for the company. But, how do we think about the distribution side of things now? I think that we are in a market where there's a lot of overlap between what end users want down-market and what end users want up-market. And so for a long time, we just focused on start-ups and mid-market companies. We liked focusing on those folks because that's kind of a crucible in which you can test the product. It really pushes you to have the best product, because the way the procurement process works down-market is just picking kind of the best thing. But we got lots of inbound from companies that we had really respected for a long time. And we went and tested the product within some larger customers, spent a bunch of time talking with larger customers, and it seemed clear to us that there were gonna be economies of scale here. And one company was gonna take the entire market because there's enough overlap between end users down-market and that market. And so that's when we started building out more of the sales and post-sales side of things to be able to serve larger customers. And then the way this feeds into margin: the thing we're optimizing for is the long-term value of the company over the course of many decades. And that means taking the market, having lots of paid, engaged usage, serving as many customers as we can. But having a healthy margin on teams and enterprises is important for them feeding back into research and development, to make sure that you can build the best product, which is the most important thing in this market.
And so our approach here is: individual developers, we want that to be sustainable, but we actually don't think of that as the main way we're going to, you know, fund research and development. It's teams and enterprises, which is already over half our revenue, actually. I guess, like, selfishly, when do you think that starts to flip for you guys? Like, when should we expect a big Cursor renewal bill? Or, like, you know, is it just that the price of the models is gonna come down over time, and that's the way that you start to build up margin in those areas? I think we have seen the price of intelligence come down over time. There's been this, like, countervailing force where then people have started using the AI much, much, much more Yeah. Over time. Yeah. To be clear, we think it's important, already, when you're serving larger customers, to maintain, you know, a sustainable margin that can then feed into building the best product possible. So I think that the big shift to watch for is how fast the use of compute amongst, like, an individual developer grows relative to, like, the token cost coming down. And I do think that there's a chance we're headed for a world where in the next year, yes, token costs kind of continue to go down predictably, but it gets easier and easier to do more things in parallel and to, you know, as one person, orchestrate more and more compute. And so you can, I think, from that, like, start to see these bills go up. And I think rightly so, actually. Like, I think that the compute cost will go up over time, probably over the next year. Yeah. Wow. And not on a per-token basis, but just because you will be able to do so much more as one person. Yeah. So on a per-person basis. But to be clear, like, dollars per line of code written will continue to go down. Yeah. Of course. That makes sense.
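The per-developer spend dynamic described here is simple arithmetic: per-token prices fall, but parallel agents multiply daily token usage faster. A sketch with made-up illustrative numbers (none of these figures are Cursor's actual economics):

```python
def annual_spend(tokens_per_day: float, price_per_mtok: float, workdays: int = 250) -> float:
    """Annual model spend for one developer, in dollars."""
    return tokens_per_day * workdays * price_per_mtok / 1_000_000

# Year 1: 2M tokens/day at a hypothetical $10 per million tokens.
year1 = annual_spend(2_000_000, 10.0)   # $5,000
# Year 2: token price halves, but parallel agents 5x the daily usage.
year2 = annual_spend(10_000_000, 5.0)   # $12,500
# The per-developer bill rises even though each token got cheaper,
# while dollars per line of code written can still fall.
print(year1, year2)
```

Under these assumed numbers, per-token cost halves while the per-person bill more than doubles, which is exactly the "not on a per-token basis, but per-person" shape of the claim.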
So one of the most annoying questions I always got from investors was, like, what the moat is. And my view was always, like, look, if you're a start-up, the moat keeps you out of the market. So, like, you know, it's a good thing there's not a moat today, and it's something that kinda comes with time. Do you have a point of view? I mean, I know it's early, but, like, over time, where might that come from in your market? Like, how might that develop over time? I think that there are two big ones to call out. So one is a deep technical moat that especially comes from product data too. And so here, I think the search analogy comes through again where, you know, the story of search was: there were dozens of people on the starting line. It was this very visible large market. A few people start to pull away because they build a better product. Then you get lots of people using your thing, and then that is an important input for making the product better, because you can see what search results people are clicking on and what they are bouncing back from, which helps you make the search ranking better. And I think you have that many times over in our market where part of the purpose of the product is to help you write code faster. We can see where AI is helping people, where AI is not helping them, and then use that to make the product better. You can then make fantastic tab models that you can only make if you have lots of distribution. And so that's already played out on the tab front. Actually, we're on, like, our fifth or sixth generation model there, and a really important input to that is product data and having lots of people using Cursor. And I think that that's gonna continue to play out too in kind of these new areas of, you know, more the agent side of things.
And already there, the Cursor agent is like this chimera of both the API models, which we're excited about and excited to see continue to get better, and then also custom models working with them and around them. Another one, I do think, is more normal team-level, like, software dynamics where right now, you know, which code editor you use is a little bit of a single-player decision. I think it'll start to become more team-level, and it will start to become more and more a decision about how your team works. And we're already starting to see that in the product where, you know, now you can go beyond just running, you know, a one-on-one agent locally in Cursor. So, basically, spinning off multiple agents in parallel in the cloud, and, you know, spinning off agents from your team communication system, and also running agents on all issues in your issue tracker, and that starts to become much more of a team-level decision. Then I think over time what's gonna happen is, you know, in the far future of, like, 2030, I don't think humans are gonna be, like, staring at TypeScript and Rust and Go. I think that things are gonna be a little bit higher level, and so that story, I think, is gonna change too. Someone who was at the Sun Valley conference recently told me that Sam from OpenAI was talking to the group there and said that he thought that Anthropic was not the real challenger to OpenAI that you guys were. And I'm curious I don't know if that had made it back to you, but what's your take on that? Well, honored, honored that he even knows about us. Not sure you need the help. I'm sure it helps with your fundraising. Honored, don't know the details of that, and we're excited to use kind of the best of what all the API providers have to offer. And I actually think that a lot of people talk about, you know, which model's best, one model this, one model that.
There are already different model SKUs that are developing that we use in different ways in Cursor. So we use really small, fast models in parts of the product behind the scenes. We use kind of the Sonnet, GPT-4 type profile of speed and cost too. That's you know, when people think about the coding agent that they're using in the foreground, it's that model plus a bunch of custom models around it. And then we also use the slow, expensive models in the background, especially for more of this world of you're running multiple things in parallel, you're less working kind of one-on-one synchronously with the agent. And I think amongst more of those SKUs, you know, along the, like, n-dimensional matrix of, like, speed and intelligence and long-context stability and, like, computer-use ability, there will be multiple models that we will use. And so OpenAI is a really important partner of ours there, and I think that there are other companies that we continue to be really excited to partner with there. Our end goal is to automate coding and make it so that people can build anything they want on computers. And so it feels very different and maybe complementary with what OpenAI is doing. I don't know. You fast forward, like, two years. How much of, like, the models that we end up using when we're using Cursor are developed by Cursor versus developed by third parties? So far, we've monotonically just introduced more places where we're using our own models to improve things. I think that will continue. I think that the number of models being used in Cursor will also go up over time too. So it's not a zero-sum thing. It used to be that there was only kind of one way to use a model in Cursor, and now there are many different ways, many different types of models that are used behind the scenes, both to power the agent, to power tab, to power other things. So, yeah, I don't think it's zero sum.
I do think that we are, you know, an early experiment in a company that's between being a normal software company and one of these foundation labs. And one thing that's been surprisingly useful for us there is developing our own models. And so it's something that we're, you know, investing in a ton, and trying to do in a way that isn't reinventing the wheel with some of the more general API models. That makes sense. Yeah. Cool. That's it. That's all I got. Thank you. A big thank you to Parker and Michael for that really insightful conversation. Really appreciate the time. As a quick note, my name is Adel. I'm a member of the Rippling team. I've been GM of Startups recently. In fact, I was CFO of the company. I wanted to take a minute to really double-click on a few of the topics that Michael and Parker touched on. And really, it's about the growth of AI companies and what we're seeing as a fundamental shift. And in particular, what we're seeing is that these companies are doing a few things differently than companies we've ever seen before. They're growing businesses at exponential rates. So regardless of the category, all these AI companies are growing much faster than anything we've seen in the past. They're really emphasizing the importance of hires, making sure that every hire is able to have a 10x impact. The kind of impact that you'd really look for only from the engineering org is now translating across the entire organization. And then lastly, I'd say the operations of these AI companies are leaner from the get-go. They start off with a much bigger focus on the right tooling and making sure it scales with their business for many years to come. And all of this means that the rules of engagement around company building have been transformed with these AI businesses. And there's a lot that all of us can learn, whether we're AI native or AI adopters. And in the case of Rippling, you know, we're here for it.
This transformation is something that ties in really nicely with the work that we've been doing over the last eight-plus years. And there's a reason why, you know, the leading AI companies today that have really figured things out are using Rippling. And that's why we say, as a pretty common phrase here, that AI runs on Rippling. We'll talk a little bit more about that. When you look at the most ambitious AI companies out there, whether it's, like, Cursor or Harvey or Clay, or the other 15,000 startups, right, that are making that hard effort to build a company from nothing, all of them have chosen Rippling as their operational backbone. And you think about why. It's not just about payroll or insurance or, you know, the admin stuff that needs to get done but shouldn't take away from your actual company building. It has a lot to do with Rippling's architecture. You know, Rippling is designed as a single system of record with the full context of your business, whether it's people or payroll, devices, spend, compliance: all those areas are covered in a very deep way inside the Rippling platform. And that context is what sets us up to really have unique capabilities around powering your AI initiatives well into the future, across all departments, not just payroll, finance, HR, and IT. And that's just one corner of sort of how Rippling is adding value to companies in all spaces with the AI wave. You know, companies are really scaling faster than we've ever seen before, and that speed, which comes with extreme go-to-market fit, is also presenting challenges, but it has some gifts. And one of the gifts is that it's creating an opportunity for admins and leaders to step up and show their ability to be dynamic and change with the times, but also pick up a lot of the challenges and issues that early founding teams and team members might have had in scaling the business, and really help build the company for the future.
Every hour that you spend as an admin on compliance or admin tasks or switching systems or stitching things together is an hour that's really stolen from building your product, serving your customers, or hiring the best talent. And we know that. And that's why, you know, we are really focused on serving AI companies, whether it's Cursor, Clay, Harvey, Decagon, all of whom are running on Rippling. And we're helping them in many ways, but I wanna focus on three kinda major points today. The first is that we're helping them win the AI talent wars. So the use of Rippling products, including the PEO, allows these companies to offer Fortune 500 level benefits to close some of the top talent in the industry without all the complexity of being a large enterprise. The second factor is eliminating the admin work. So again, we talked about this admin junk that needs to just get done. We automate a lot of those tasks inside Rippling to help you move faster as you scale your business. And then lastly, we set you up for success in the future. What that really means is scaling without switching. How can we create a back office that scales every time you level up and hit a new milestone, without having to stop and rethink your entire tech stack? So in this era where the growth of AI companies is compounding as quickly as it is, Rippling gives founders back their most valuable resource, which is time. What I wanna present to you today is a really important initiative for us and a way for us to give back to founders. So we've packaged our most impactful products into a single bundle. We're calling it the Startup Stack. And this curated bundle has some of the most impactful products that are relevant for early-stage founders, but really could be mid-stage or late-stage companies as well, all that are venture-backed, in terms of the industry focus. What does it include?
Payroll for both US and global teams, a modern HRIS and ATS, benefits administration including FSA, HSA, and commuter, but we also include things like the PEO program, or Rippling HR Services, as part of this offering. Spend management is included, which has our corporate card, expenses, and bill pay, and even our IT products, which include single sign-on, device management, and inventory. All of which is included in the Startup Stack bundle. What are you getting with the offer? You're getting six months free. You're getting everything you need to get your start-up off the ground, the full suite to power your fast-growing start-up, plus additional perks like 1.75% cash back, real cash back on your corporate card, not just points or gimmicks, and four months free on our EOR for any of the international employees that you're hiring. So, in a nutshell, the Startup Stack is built to give you a serious advantage in your company-building experience, put you ahead of the curve, and unlock real operational leverage for you to continue to scale for a very long time as you keep building a very successful business, again, on Rippling. The same way many companies in the AI space are building on Rippling. Thanks again for your time. Bye.