LLMs, Taking 0x to 1x and Beyond

Aug 20, 2025

The greatest strength of a small startup is versatility. With fewer people and simpler processes, young companies can adapt and implement changes quickly without being slowed down by bureaucracy. Given how rapidly LLMs and AI coding tools have evolved in the past few years, startups that embrace these shifts gain a major advantage over larger, slower-moving competitors. At tread.fi, we use LLMs extensively for our product development. I don't think what we do is revolutionary, but over the years, we've found a workable balance between the practical and the efficient.

Core Values

The actual apps and models we use are not nearly as important as the core values we propagate from the top down. We’ve switched IDEs multiple times and swapped models almost every quarter. Because the landscape changes so quickly, committing to one AI stack feels like a surefire way to fall behind.

What matters more is how we treat AI-generated code, and our willingness to compromise on some traditionally “good” software development practices when speed or leverage matters more. These values have guided us better than any single tool or framework.

  • Prioritize speed of iteration

  • Don’t be scared of sunk cost

  • Single-threaded is better than multi-threaded

  • We are many, but the model is one


Prioritize speed of iteration

Since we aren't at the stage where LLMs can write better code than senior developers, the biggest strength of coding LLMs is the speed at which they can generate code. As such, we've set up our development environment to prioritize speed of iteration.

Webpack + React is a god send for reducing iteration latencies


The biggest downfall of LLMs is their desire to write something that merely works. Human developers build features modularly, writing functions and organically improving designs, and sometimes even the features themselves, over a period of hours or days. The goal of any LLM-based coding agent, by contrast, is to take a prompt and write a complete feature. With ambiguous prompts, this often leads to unstated assumptions, or to sub-optimal designs that overlook improvements simply because they weren't in the prompt.

As a result, the speed at which developers can take what the AI wrote and run it is crucial to increasing the number of iterations you can get out of the LLM.

The biggest improvement we made to minimize the turnaround time from LLM to testing is hot-deployed applications. In both the webserver code (in Python) and the front-end code (in React), we prioritize lean dev environments that anyone can set up, giving instant product-level feedback back to the LLM. Cursor agents will sometimes try to spin up the application themselves to do some sort of testing, but this ends up very slow. Instead, we have our agents write the code along with test files, and the developer can immediately visualize and test the completeness of the feature within 5 seconds.
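As a sketch of what the lean, hot-deploy side of this looks like, here is the file-watching half of a minimal reload loop in pure Python. The watched extensions, poll interval, and `reload_fn` hook are illustrative, not our actual tooling:

```python
import time
from pathlib import Path

def snapshot(root: str, exts=(".py", ".jsx")) -> dict:
    """Map every watched source file under root to its last-modified time."""
    return {p: p.stat().st_mtime for p in Path(root).rglob("*") if p.suffix in exts}

def changed(before: dict, after: dict) -> set:
    """Files added, removed, or modified between two snapshots."""
    return {p for p in before.keys() | after.keys() if before.get(p) != after.get(p)}

def watch(root: str, reload_fn, poll_s: float = 0.5):
    """Poll for edits and fire the reload hook; Ctrl-C to stop."""
    state = snapshot(root)
    while True:
        time.sleep(poll_s)
        now = snapshot(root)
        if changed(state, now):
            reload_fn()  # e.g. restart the dev webserver process
            state = now
```

The point is not the watcher itself (Webpack's dev server and Python auto-reloaders already do this better); it is keeping the edit-to-feedback gap in the single-digit seconds.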

With every feature an LLM writes, we probably go through 5-10 iterations, with each coding session lasting approximately 30 seconds. If the deploy part of the cycle also happens within 5 seconds, the full loop gets short enough that 5-10 iterations can take less than 30 minutes.
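Back-of-the-envelope, that claim checks out. In the sketch below, the 30 s generation and 5 s deploy figures come from the text; the roughly 2 minutes of human review per iteration is our own assumption:

```python
def cycle_time_minutes(iterations: int, gen_s: int = 30,
                       deploy_s: int = 5, review_s: int = 120) -> float:
    """Wall-clock time for a full LLM feature loop, in minutes."""
    return iterations * (gen_s + deploy_s + review_s) / 60

# Ten iterations, each with a ~2-minute human check, still land under 30 min.
print(cycle_time_minutes(10))  # ≈ 25.8
```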


Don't be scared of sunk cost

In the same vein as the LLMs' coding velocity comes the sheer volume of code. Not limited by physical typing speed, and able to spit out blocks of working code with barely any syntax or semantic errors, LLMs are incredibly good at producing large amounts of code quickly.

Try and try again


As a result, we treat LLM-generated code as a disposable resource. If it took too many iterations to get somewhere, it is often best to take what you learned, throw away the branch, and start over.

Many models are very good at producing code, but they are equally bad at cleaning up unused code. Although GPT-5 and some of the other reasoning models are careful about each line of code, many other models will produce repeated blocks of code and forget to clean up unused helper functions.

This rule helps non-developers use LLMs far more efficiently, because they often lack the discretion to read a change and evaluate its quality. The rule of thumb: if you've been talking to an AI for too long, you've probably created a bunch of clutter. You can keep going, but it is usually best to start over. You may have wasted an hour of your time, but you saved the codebase from a lot of clutter (and a senior dev from reviewing your changes).

Single-threaded is better than multi-threaded

Traditional software development, especially in big teams, emphasizes Agile and the breaking down of epics into smaller tasks. In a way, it abstracts the product logic into digestible technical requirements. It is a great way to accomplish work as a multi-threaded organization, but with LLMs, we think it adds too much friction.


A junior developer might take 4 minutes just to open the right files.


In some firms, "10x-ers" is the term for individual contributors who can produce roughly 10x the output of a typical engineer. With senior engineers who were close to 10x-er status encouraged to use LLM IDEs and background agents (Cursor background agents or Codex), one person can become a context-switching, multi-threaded team.

A single developer who can build a feature end-to-end reduces the need for internal discussions that teams of developers would need to accomplish the same task.

In our development pipeline, we categorize features by size. Large features would take a senior developer a few days to a week to ship. Small features are often minor improvements and are pooled up for "vibe coders" to pick up and work on. Traditionally, these smaller tasks went to junior developers, but with LLMs empowering product owners to write code too, the junior developer role has become a puzzling need: not yet experienced enough to call LLMs out on their mistakes, yet relying on LLMs to write much of the code without the product knowledge of a PM.

We are many, but the model is one

LLMs will never forget if you put it in the pre-prompt


The team has many individual minds, but the models are one. We might have different IDEs and often just set Cursor to auto, but the beauty of LLM coding is that we don't all have to learn how to code ourselves.

Sharing information and lessons is as easy as maintaining a common preprompt.

This might seem obvious, but during our weekly standup, we share lessons learned from LLMs: what kinds of mistakes they made, what new things we figured out how to ask them to do, and what new shiny toys there are to play with. In a Slack channel called #vibe-coding, senior developers share pre-prompts for the "vibe coders" based on certain tics they observed during code reviews.
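Concretely, a shared pre-prompt can live in the repo itself; Cursor picks up project-level rules from a `.cursorrules` file (newer versions use `.cursor/rules/`). The rules below are illustrative of the kind of lessons we mean, not our actual file:

```
# Shared lessons from code review (illustrative)
- Prefer editing existing components over creating near-duplicates.
- When you replace a helper function, delete the old one; leave no dead code.
- Do not spin up the application to test changes; write or update test files instead.
- Follow the existing Webpack + React layout; do not introduce new build tooling.
```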

LLMs in Practice

It is quite obvious to the programming community that LLMs do one of two things for two groups of people:

  • Make good developers faster and more efficient

  • Make non-developers somewhat ok developers without formal training

I pushed the second point to the extreme: I made everyone code, and now everyone codes daily.

UI/UX designers, product owners, and even our marketing guy have Cursor Pro and know how to bring up the React application that points to our shared staging backend server. We used to write many tickets on our Kanban board; nowadays, the ticket writers can give the ticket to Cursor instead and fix most of the bugs themselves. They can iterate through each change within minutes and create a pull request (without even learning Git, because LLMs in Cursor can do that for you). The senior developers review the changes and leave comments, which often get sent directly to an LLM to fix; once acceptable, the changes get merged into main for further testing.

Even without writing any code, everyone can now read and understand our codebase at superhuman speed. Our product manager used to rely on developers to explain how certain aspects of our trading engine work. It is a very complex codebase that spans JavaScript, Python, Go, and some shell scripts. Now they can just ask the LLM, and not only is it faster than asking a developer, it is awake 24/7 and knows every inch of the codebase.

LLMs write much of our documentation now; they can integrate with GitBook to understand the style of the documentation and follow existing templates and semantics. We ask an AI agent to generate release notes for us. It can query Git for the changes since the last release and produce a list of changes without us having to be extra careful about commit messages, because it understands the code logic.
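A sketch of the deterministic half of that release-notes pipeline. The prefix-to-section mapping and the `commits_since` / `draft_release_notes` helpers are illustrative, and assume commit subjects loosely follow a "prefix: message" convention; the LLM takes the rough draft from here:

```python
import subprocess
from collections import defaultdict

# Illustrative mapping; anything unmatched lands in "Other changes".
SECTIONS = {"feat": "New features", "fix": "Bug fixes"}

def commits_since(tag: str) -> list:
    """Commit subjects on HEAD since the given release tag."""
    out = subprocess.run(
        ["git", "log", f"{tag}..HEAD", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def draft_release_notes(subjects) -> str:
    """Group commit subjects into a markdown draft for the LLM to polish."""
    grouped = defaultdict(list)
    for subject in subjects:
        prefix, _, rest = subject.partition(":")
        section = SECTIONS.get(prefix.strip(), "Other changes")
        grouped[section].append(rest.strip() or subject)
    lines = []
    for section, items in grouped.items():
        lines.append(f"## {section}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```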

Our UI designers can still use Figma, but they can connect Cursor to Figma so that it can take a Figma mockup and create the same thing in React, often making correct assumptions about what certain buttons or widgets do.

Cursor is connected to Slack, so we can even ask it to work on tickets without much context. It will read through our Slack messages and, much like a developer gathering requirements through conversation, understand the task, but produce a PR in minutes.

Where do we want to go from here?


  • We are working on a testing suite consisting of a headless browser that automates common user journeys and an AI with image recognition that dynamically flags unexpected behavior. This can give us a very dynamic and adaptable testing framework without deterministic test conditions.

  • We still prefer to do code reviews ourselves because it is the last semblance of human autonomy, but we want a multi-step code review process where one LLM reviews another LLM's code and, with more tailored pre-prompts, cleans up and optimizes it.

  • We want LLMs to be an interface for our application itself. A custom-trained LLM that has read our internal discussions could be a very good quant trader. With some guardrails in place, it could be a dynamic interface for non-traders to execute very complex strategies and get the most out of our engine.


If you want to check out our crypto algo trading platform, head over to app.tread.fi. If you ever become one of our clients and wonder how we get things shipped so fast, hopefully this explains the mystery.
