I took part in the SF10X hackathon in early August and ended up winning second place and $1,500 for a project that generates scalable visualizations of what the city would look like under the proposed rezoning.
The project was mentioned in the SF Standard’s coverage of the SF10X hackathon, and I’ve continued developing it along with Mike Hankin of D9 Neighbors for Housing (Calvin Rogers, Vivien Kong, and Joe Foster were also on our team). Our final presentation is here, the project page is at emunsing.github.io, and below are a few notes and lessons from the process.

High-level process (a rough code sketch of each step follows the list):
- We used data from Salim Damerdji’s rezonesf project, exported from .rds files to Parquet and GeoJSON files
- In Python, we simulate whether each lot gets developed in each year of the study period, based on a per-lot development probability
- In Python, we then work with the GeoJSON files to identify the “front” of each lot in order to compute rear-yard setbacks (thanks, Mike!).
- With that base building footprint, we can create a polyhedron in Blender using bpy and apply textures to each of the wall and roof faces. “Unwrapping” the Blender polyhedron and applying an appropriately scaled, appropriately oriented texture was surprisingly difficult.
- For Apple ARKit, we export USDZ files, which are loaded directly into ARKit.
- For Google Earth, we export DAE files and a KML file (with the coordinates of each object), and zip them into a KMZ archive along with the texture images. This lets us load thousands of buildings into Google Earth so that we can “explore” the new city.
- For creating videos within Blender, we use the Blender-OpenStreetMap (BlOSM) plugin to download 3D tiles from Google Earth and appropriately place them. The z-registration for BlOSM is not convenient, requiring additional work to make sure your buildings aren’t underground or floating in the air.
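
First, the .rds-to-Parquet/GeoJSON conversion. A minimal sketch assuming the pyreadr and geopandas libraries, lot geometries stored as WKT strings, and hypothetical file and column names:

```python
import pyreadr            # reads R .rds files into pandas DataFrames
import geopandas as gpd

# Hypothetical paths and column names; the real rezonesf exports differ.
df = pyreadr.read_r("parcels.rds")[None]   # an .rds file holds one unnamed object
df.to_parquet("parcels.parquet")

# Assuming lot geometries are stored as WKT strings in a "geometry" column.
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.GeoSeries.from_wkt(df["geometry"]), crs="EPSG:4326"
)
gdf.to_file("parcels.geojson", driver="GeoJSON")
```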
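Second, the development simulation. A toy Bernoulli version of the idea, assuming a hypothetical `p_develop` column holding each lot’s annual development probability:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def simulate_development(lots: pd.DataFrame, years: range) -> pd.DataFrame:
    """Flip a weighted coin for each still-undeveloped lot in each year."""
    developed_year = pd.Series(np.nan, index=lots.index)
    for year in years:
        undeveloped = developed_year.isna()
        hits = rng.random(undeveloped.sum()) < lots.loc[undeveloped, "p_develop"].to_numpy()
        developed_year.loc[undeveloped.index[undeveloped][hits]] = year
    return lots.assign(developed_year=developed_year)

# lots = pd.DataFrame({"p_develop": [0.02, 0.10, 0.35]})
# simulate_development(lots, range(2026, 2041))
```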
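Third, the front-of-lot step. I won’t reproduce Mike’s actual logic here; one plausible shapely sketch is to pick the lot edge whose midpoint is closest to the nearest street centerline (corner lots and irregular parcels need more care):

```python
from shapely.geometry import LineString, Polygon

def front_edge(lot: Polygon, street: LineString) -> LineString:
    """Return the lot edge whose midpoint is closest to the street centerline."""
    coords = list(lot.exterior.coords)  # ring: the first vertex repeats at the end
    edges = [LineString(coords[i : i + 2]) for i in range(len(coords) - 1)]
    return min(edges, key=lambda e: e.centroid.distance(street))

# Toy example: a rectangular lot fronting a street along y = -5.
lot = Polygon([(0, 0), (10, 0), (10, 20), (0, 20)])
street = LineString([(-50, -5), (50, -5)])
print(front_edge(lot, street))  # LINESTRING (0 0, 10 0)
```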
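Fourth, the Blender geometry. A minimal bpy/bmesh sketch that extrudes a footprint into a box and cube-projects the UVs so facade textures keep a consistent real-world scale (material assignment omitted; footprint and height are made up):

```python
import bpy
import bmesh

footprint = [(0, 0), (12, 0), (12, 8), (0, 8)]  # made-up footprint, metres
height = 12.0                                   # made-up building height

mesh = bpy.data.meshes.new("building")
obj = bpy.data.objects.new("building", mesh)
bpy.context.collection.objects.link(obj)

# Build the base face, then extrude it upward to create walls and a roof.
bm = bmesh.new()
base = bm.faces.new([bm.verts.new((x, y, 0.0)) for x, y in footprint])
extruded = bmesh.ops.extrude_face_region(bm, geom=[base])
top = [v for v in extruded["geom"] if isinstance(v, bmesh.types.BMVert)]
bmesh.ops.translate(bm, vec=(0, 0, height), verts=top)
bmesh.ops.recalc_face_normals(bm, faces=bm.faces)
bm.to_mesh(mesh)
bm.free()

# Cube-project the UVs so every face gets texture coordinates at a fixed
# metres-per-tile scale, which sidesteps most of the unwrapping pain.
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.mode_set(mode="EDIT")
bpy.ops.mesh.select_all(action="SELECT")
bpy.ops.uv.cube_project(cube_size=4.0)  # ~4 m per texture repeat (assumed)
bpy.ops.object.mode_set(mode="OBJECT")
```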
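Fifth, the ARKit export. This assumes Blender 4.x, whose USD exporter can write a .usdz archive directly when given a .usdz extension; parameter names may vary across Blender versions:

```python
import bpy

bpy.ops.wm.usd_export(
    filepath="block_0123.usdz",     # hypothetical output name
    selected_objects_only=True,     # export only the selected buildings
    export_textures=True,           # pack the texture images into the archive
)
```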
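Sixth, the Google Earth export. A sketch of one building’s DAE + KML + KMZ packaging with made-up coordinates and file names; the real pipeline writes one `<Placemark>` per building:

```python
import bpy
import zipfile

# Export the selected building as COLLADA.
bpy.ops.wm.collada_export(filepath="building.dae", selected=True)

# Minimal KML that places the model at a geographic coordinate (made up here).
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>building</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>-122.4194</longitude>
        <latitude>37.7749</latitude>
        <altitude>0</altitude>
      </Location>
      <Link><href>building.dae</href></Link>
    </Model>
  </Placemark>
</kml>
"""

# A KMZ is just a zip of doc.kml plus the models and textures they reference.
with zipfile.ZipFile("building.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    kmz.writestr("doc.kml", kml)
    kmz.write("building.dae")
    kmz.write("facade.png")  # hypothetical texture referenced by the .dae
```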
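Finally, the z-registration workaround for the BlOSM-imported tiles: ray-cast each building straight down onto the tile mesh and snap it to the hit point. Object and collection names here are hypothetical:

```python
import bpy
from mathutils import Vector

terrain = bpy.data.objects["google_tiles"]        # hypothetical BlOSM tile mesh
inv = terrain.matrix_world.inverted()

for obj in bpy.data.collections["buildings"].objects:  # hypothetical collection
    # Cast a downward ray in the terrain's local space, starting well overhead,
    # so the building itself can't occlude the hit.
    origin = inv @ (obj.location + Vector((0.0, 0.0, 500.0)))
    direction = (inv.to_3x3() @ Vector((0.0, 0.0, -1.0))).normalized()
    hit, location, _normal, _index = terrain.ray_cast(origin, direction)
    if hit:
        obj.location.z = (terrain.matrix_world @ location).z  # base on the ground
```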
Some notes:
- Google Earth is a terrible platform to develop for, but it’s the easiest tool to use and there aren’t better open-source competitors. The fact that it manages 3D tile downloads for free is nice; the fact that you can’t script camera angles, image generation, lighting, or shadow studies, or integrate it with 3D objects other than a KMZ, is a huge drawback. Occasional glitches in tile alignment make it a headache to develop for and can create terrible visual artifacts. I would *love* it if Google would open-source a version of their frontend into which you can drop your 3D tile API key.
- I hadn’t used augmented reality (AR) or Apple’s ARKit before, and had high hopes that we would magically get photorealistic renderings of buildings in their correct locations. However, Geospatial ARKit was really underwhelming: objects placed in the world remain visible on your phone screen no matter what buildings, cars, people, or trees lie between your camera and the object. I’m surprised that people were so excited about Pokemon Go, given how poorly it handles object occlusion and how little development has happened in the last 10 years (Pokemon can still hop behind walls and disappear). Fast, reliable occlusion seems like an incredibly helpful and important tech advancement for making AR engaging.