Energy, Learning, and Optimization
Scaling mathematical discovery of new optimization methods with a robust pipeline for tuning and testing LLM-generated optimization code on dynamically generated test functions.
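As a taste of what "dynamically generated test functions" can mean, here is a minimal Python sketch of the benchmarking idea: sample a fresh random convex quadratic each trial and score a candidate optimizer against its known minimizer. The quadratic family and the gradient-descent candidate are illustrative assumptions, not the pipeline's actual contents.

```python
import numpy as np

def random_quadratic(dim, rng):
    # f(x) = 0.5 x^T Q x + c^T x, with Q symmetric positive definite.
    M = rng.normal(size=(dim, dim))
    Q = M @ M.T + np.eye(dim)
    c = rng.normal(size=dim)
    f = lambda x: 0.5 * x @ Q @ x + c @ x
    grad = lambda x: Q @ x + c
    x_star = np.linalg.solve(Q, -c)  # known minimizer, used for scoring
    return f, grad, x_star

def candidate_optimizer(grad, x0, steps=200, lr=0.01):
    # Stand-in for an LLM-generated optimizer: plain gradient descent.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = np.random.default_rng(0)
errors = []
for trial in range(20):
    # A fresh test function every trial, so the candidate can't overfit one landscape.
    f, grad, x_star = random_quadratic(dim=10, rng=rng)
    x = candidate_optimizer(grad, x0=np.zeros(10))
    errors.append(np.linalg.norm(x - x_star))
print("median distance to optimum:", np.median(errors))
```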
For the last few months, I’ve worked on Gigglebot, an iOS app for instantly putting AI images in your text messages. I started with no experience in Swift or iOS app development, so it’s been a huge learning experience, and I wanted to share some of the insights I’ve gained along the way. » Read More
As AI agents have gained traction over the past year, there’s a buzz in the Bay Area that someone, somewhere, sometime soon might build the first one-person company with a $1 billion valuation: 1-person-1-billion, 1P1B, or the solo unicorn.
AI-first companies have already had some impressive exits (Base44 sold for $80M in June 2025), » Read More
At a hackathon in 2023, I worked alongside a rising college junior majoring in computer science who was worried that he should switch majors because AI would eliminate software development jobs. At the time, I brushed off his concerns, as I expected most of AI’s usefulness to be in written language, » Read More
I took part in the SF10x hackathon in early August and ended up winning second place and $1,500 for a project that generated scalable visualizations of what the city would look like under the proposed rezoning.
The project was mentioned in the SF Standard’s coverage of the SF10x hackathon, » Read More
This blog has been quiet for the last 4+ years while I was working at Teza Technologies, a quantitative hedge fund based in Austin. A combination of financial regulations, company policies, and a culture of secrecy keeps people from talking about life at quant funds, but now that I’m out, » Read More
Conventional centralized optimization algorithms struggle with big optimization problems: at some scale, you simply can’t fit the problem on a single computer, let alone manage all of the variables. To get around this, researchers use decentralized optimization techniques that break the problem into a set of subproblems that can be rapidly solved on distributed computers or smart devices, but this exposes the optimization algorithm to cybersecurity threats when those consumer-level devices are hacked. » Read More
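The core move here is splitting one big problem into locally solvable pieces plus a communication step. Here is a minimal Python sketch of that idea using consensus gradient descent on a ring of agents; the quadratic costs, ring topology, and step size are illustrative assumptions, not the specific algorithms from the post.

```python
import numpy as np

# Each of n agents holds a private least-squares cost f_i(x) = ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A = [rng.normal(size=(4, dim)) for _ in range(n_agents)]
b = [rng.normal(size=4) for _ in range(n_agents)]

# Each agent starts from its own local estimate of the solution.
x = [np.zeros(dim) for _ in range(n_agents)]
step = 0.01

for _ in range(500):
    # Local step: each agent descends its own private cost, in parallel.
    grads = [2 * A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)]
    x = [x[i] - step * grads[i] for i in range(n_agents)]
    # Consensus step: agents average estimates with neighbors on a ring network,
    # slowly pulling everyone toward the minimizer of the summed cost.
    x = [(x[i - 1] + x[i] + x[(i + 1) % n_agents]) / 3 for i in range(n_agents)]

print("agents (approximately) agree on:", np.round(x[0], 3))
```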
In many technical fields, a PhD dissertation is written with the mantra “staple 3” – take three peer-reviewed articles, add an introduction and conclusion, and turn it in. In writing my dissertation, I used LaTeX in Overleaf and git to make it easy to reuse content from articles. » Read More
Ethereum makes it easy to run simple computations on a blockchain while preserving the benefits of decentralized networks, but real applications need access to off-chain resources, whether for heavy computations or for pulling data from the internet. This tutorial walks through how to deploy a smart contract on a private test blockchain and use Python to read data from, or send data to, the blockchain. » Read More
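For a flavor of the Python side, here is a minimal sketch using web3.py against a local test node (e.g. Ganache or Hardhat at the default port). The contract address, ABI, and get/set function names are placeholders for whatever contract the tutorial actually deploys.

```python
from web3 import Web3

# Connect to a private test blockchain running locally.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
assert w3.is_connected()

# Placeholder ABI for a toy contract with get() and set(uint256).
abi = [
    {"name": "get", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "set", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "x", "type": "uint256"}], "outputs": []},
]
contract = w3.eth.contract(
    address="0xYourContractAddress",  # placeholder: fill in the deployed address
    abi=abi,
)

# Read: a free, local call that doesn't create a transaction.
value = contract.functions.get().call()

# Write: a transaction from an unlocked test account, mined by the local node.
tx_hash = contract.functions.set(42).transact({"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx_hash)
```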
Support vector machines are the canonical example of the close ties between convex optimization and machine learning. Trained on a set of labeled data (i.e. this is a supervised learning algorithm), they are algorithmically simple and can scale well to large numbers of features or data samples, » Read More
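As a quick illustration of the supervised setup, here is a minimal sketch of training an SVM on labeled data with scikit-learn; the synthetic dataset and linear kernel are illustrative choices, not the post's setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic labeled data: feature matrix X and class labels y.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C trades off margin width against margin violations in the convex objective.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```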