Is Generative Design Doomed to Fail?
Recently my friend Daniel Davis posted an article called Generative Design is Doomed to Fail. It is no doubt a fascinating article, with some very apt analogies and a clear line of reasoning for why Daniel thinks generative design is not a pragmatic process and is "doomed to fail". I would like to first state that I have the highest respect for Daniel as a professional, a human being, and an industry leader. Daniel is the former Director of Research for WeWork, has spoken at multiple events, and has been published in Architect Magazine, including on the topic "Can Algorithms Design Buildings". He is no doubt a very smart person. All that to say, I felt compelled to author my own personal opinion and review of his most recent article, "Generative Design is Doomed to Fail".
First off, there are many points on which I agree with Daniel. One such point is that the more technologies get adopted, the vaguer our definitions of them become. For the purposes of his article he uses the Autodesk definition of generative design: (1) designers define the project’s goals, (2) algorithms produce a range of solutions, and (3) designers pick the best result. We will try to stick close to these, but there are also hybrid models that use "co-design processes", which is perhaps where I personally feel these tools become very real and pragmatic.
Daniel outlines "6 major reasons why generative design is unlikely to progress". Let's take a look at them together...
Point #1 - You’re on the hook for generating the options
“....there is no pre-built mechanism for generating all the design options. Instead, you have to create your own system. From scratch."
I actually agree with Daniel in part. With Project Refinery (formerly Project Fractal), you have to build your own algorithm in Dynamo and feed it to Refinery, which does mean building from scratch. That said, we (EvolveLAB) have been quite successful delivering projects with this approach. Take for example the Hobbs Trail project we helped Hufft with.
Image Source From Hufft
We were able to create the geometric algorithms from scratch, feed it to Refinery for interrogation, and pick the most optimal option.
Hobbs Trail Generative Design Study by EvolveLAB
Master Planning Study executed by EvolveLAB using Galapagos
Both of these examples were built from scratch, and I would put both in the camp of a successful exercise. That said, quite a few generative design products have recently emerged on the market where you don't have to build your own system or algorithms from scratch. For example, you could leverage testfit.io for developers, which utilizes pre-built packing algorithms, KPIs, and scheme interrogation.
Image Source testfit.io
Another one of my favorite examples for this counterpoint is Hypar. With Hypar you utilize a set of modules (pre-built algorithms) that can plug and play with Hypar's geometry engine.
Source - AEC Magazine
Point #2 Quantity doesn’t substitute for quality
"The thinking is simple: the algorithms can’t tell good ideas from bad, but they can create designs incredibly quickly, so if we rapidly produce hundreds of options, we increase the chances of inadvertently generating a good design. …. one hundred shitty designs aren’t anywhere equivalent to one considered design. If your software was any good, it’d produce fewer designs, not more."
One of my favorite stories as a counterpoint to this comes from my experience working for one of the largest healthcare design firms in the world. I was tasked with assisting our design principal with design options for hospitals. I remember one project that had 21 options, and each option had sub-options, i.e. 21a, 21b, 21c. I assure you these were not the best possible options; they were simply the number of options I could crank out in an 8-10 hour workday. Now, what is absolutely wild to think about is that Warren Buffett has the same 24 hours you and I do. Despite his riches, he has absolutely no advantage over you in the context of time. Now, imagine you don't have just 8-10 hours but an effectively unlimited amount of time to design an unlimited number of options for your project, AND you are able to leverage data to help problem-solve. To paraphrase Daniel, quantity for quantity's sake is not a substitute for quality. However, if you can interrogate more options by problem-solving with data? That's black magic!
A great example of this is what Brett Young is doing at M2x.AI, where he is leveraging generative design for mechanical spaces. Have you ever had to 3D coordinate a project? It's a grueling process with a lot of meetings and engineers debating whether to move ductwork, fire protection, or conduit. Wouldn't it be awesome to have generative design help us optimize these kinds of spaces?
Source - Brett Young at M2x.AI
Point #3 - Comparing options is harder than it looks
"as is often the case for generative design, since there is no clear winner, leaving us to make a seemingly impossible choice between nearly identical options."
If we are simply comparing models visually, that can certainly become the case. However, by leveraging scatterplots, parallel coordinate charts, and cross-product interrogation, you can actually find a "clear winner" depending on what you are measuring against in your generatively designed models. In the below example we are leveraging Refinery's built-in data visualization to optimize for view ratios.
View Ratio study executed by EvolveLAB using Project Refinery
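To make the idea concrete, here is a minimal sketch of the kind of filtering that data visualizations like Refinery's scatterplots let you do by eye: keep only the options no other option beats on every metric (the Pareto front). The option names and metric values below are invented for illustration; a real study would pull these from the generated models.

```python
# Sketch: reduce a pile of generated options to the non-dominated (Pareto) set.
# All data here is hypothetical, purely to illustrate the comparison logic.

def pareto_front(options, maximize, minimize):
    """Return the options not dominated by any other option."""
    front = []
    for a in options:
        dominated = False
        for b in options:
            if b is a:
                continue
            # b dominates a if b is at least as good everywhere...
            better_or_equal = all(b[k] >= a[k] for k in maximize) and \
                              all(b[k] <= a[k] for k in minimize)
            # ...and strictly better somewhere.
            strictly_better = any(b[k] > a[k] for k in maximize) or \
                              any(b[k] < a[k] for k in minimize)
            if better_or_equal and strictly_better:
                dominated = True
                break
        if not dominated:
            front.append(a)
    return front

options = [
    {"name": "A", "view_ratio": 0.62, "cost": 4.1},
    {"name": "B", "view_ratio": 0.58, "cost": 3.2},
    {"name": "C", "view_ratio": 0.55, "cost": 3.9},  # worse view AND cost than B
    {"name": "D", "view_ratio": 0.70, "cost": 5.0},
]

best = pareto_front(options, maximize=["view_ratio"], minimize=["cost"])
print([o["name"] for o in best])  # ['A', 'B', 'D'] -- C is dominated
```

Even with four options and two metrics, the "clear winner" question becomes tractable: option C drops out immediately, and the remaining choice is an explicit trade-off between view ratio and cost rather than a visual coin flip.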
This way of comparing models using data is in fact a newer process in mainstream architecture. Leveraging data-driven design helps you make informed decisions about the aspects of a project that are important to you and to your clients.
Point #4 - What you can measure isn’t what matters
"In the field of architecture, there’s no consensus on what constitutes good architecture and no established ways of measuring it. Today, we might look at solar gain or view analysis, which is a component of architectural performance but not the full story. Perhaps in the future, we’ll be able to quantify other more visceral aspects of architectural performance, but I’m not holding my breath. It is easy to inadvertently optimize for the calculable rather than the important."
This is probably more of a philosophical or subjective statement in my opinion. This assumes what you can measure isn't what matters, but what if what you can measure IS what matters? What if you are able to optimize the project to what is important to each owner?
For example, measuring and minimizing travel distances for a healthcare client so nurses can get to their patients faster...
Image Source Alvaro Ortega Pickmans - Isovists
or measuring and maximizing energy efficiency by minimizing aperture opening based on solar studies.
Millennium Airport project study produced by EvolveLAB
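The travel-distance example above can be sketched in a few lines. This is not the method from either article, just a hypothetical illustration: score candidate nurse-station locations by average walking distance to patient rooms, here using Manhattan distance as a crude stand-in for a real path or isovist analysis. All coordinates are invented.

```python
# Hypothetical sketch: which candidate nurse-station location minimizes the
# average travel distance to patient rooms? Coordinates are made up; a real
# study would walk the actual floor-plan circulation network.

patient_rooms = [(0, 2), (0, 6), (4, 2), (4, 6), (8, 4)]

def avg_travel_distance(station, rooms):
    """Average Manhattan distance from a station to each room."""
    return sum(abs(station[0] - x) + abs(station[1] - y) for x, y in rooms) / len(rooms)

candidates = [(0, 0), (4, 4), (8, 8)]
scored = sorted(candidates, key=lambda s: avg_travel_distance(s, patient_rooms))
best = scored[0]
print(best, round(avg_travel_distance(best, patient_rooms), 2))  # (4, 4) 4.0
```

A generative design study is this same loop at scale: generate many candidate placements, interrogate each against the metric the owner cares about, and surface the best performers.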
Granted, to Daniel's point, we sometimes have a hard time getting the "full story" (applying generative design to a whole project). It is very difficult for software to account for all goals, for all clients, on all projects. In my opinion, though, the argument that you can't solve every goal for every client on every project should not discourage us from leveraging generative design for very targeted applications, e.g. travel distances.
Point #5 - Designers don’t work like this
"On a real project, you’ll never get it right the first time – the generative design algorithms aren’t good enough, and the circumstances of the project will change once you’ve created your first draft. So you have to make revisions. And generative design doesn’t accommodate revisions since it assumes the design process only moves forward. Designers don’t follow a linear process....To make a revision, you either need to throw everything out and start the generative design process again, or you can abandon using generative design and make the change manually. Either way, generative design makes it hard for designers to work iteratively."
I very much agree with Daniel that designers typically don't work linearly, but to me that is all the more reason to leverage an iterative generative design solution. Using the method of Generate -> Interrogate -> Iterate -> Repeat helps us create designs more quickly and cycle through design options much faster.
Further, compartmentalizing parts of the project for generative design allows us to minimize the impact on the rest of the project while working iteratively on other parts. It is no different than working with Revit design options.
Example EvolveLAB tile pattern study using Project Refinery
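The Generate -> Interrogate -> Iterate -> Repeat loop can be written down directly. Below is a minimal sketch, not any tool's actual implementation: the "design" is just a pair of parameters, and the fitness function is invented (distance from a hypothetical optimum). In a real study, generate would produce geometry and interrogate would run the metrics you care about.

```python
import random

# Minimal sketch of the Generate -> Interrogate -> Iterate -> Repeat loop.
# Everything here is illustrative: a real run would generate geometry and
# interrogate it with project metrics instead of a toy fitness function.

random.seed(42)  # deterministic for the example

def generate(parent=None, step=1.0):
    """Generate a fresh design, or a variation near an existing one."""
    if parent is None:
        return (random.uniform(0, 10), random.uniform(0, 10))
    return (parent[0] + random.uniform(-step, step),
            parent[1] + random.uniform(-step, step))

def interrogate(design):
    """Lower is better: squared distance from a hypothetical optimum (7, 3)."""
    return (design[0] - 7) ** 2 + (design[1] - 3) ** 2

best = generate()                         # Generate an initial option
best_score = interrogate(best)
for _ in range(200):                      # Iterate / Repeat
    candidate = generate(parent=best)     # Generate a nearby variation
    score = interrogate(candidate)        # Interrogate it against the metric
    if score < best_score:                # Keep it only if it improves
        best, best_score = candidate, score

print(best, best_score)  # converges toward (7, 3)
```

The point of the sketch is that the loop is inherently iterative: a revision is just a new starting point for generate, which is why I see the non-linear way designers work as an argument for this process rather than against it.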
Point #6 - No one else works like this
"Adobe isn’t holding press conferences saying that generative design is the future of graphic design"
I haven't seen any publications from Adobe in this space, but I have seen many people using generative design in other industries where Adobe products are traditionally used, specifically graphic design for websites and logos. Wix (through whom we host EvolveLAB's website) announced their own generative design solution for websites utilizing artificial intelligence.
Further, there are plenty of generative design companies online that will design a logo for you. Here is just one example from freelogodesign.org.
"Until we get to a point where algorithms replace designers (which may never happen), algorithms will only be practical if they work with humans."
As a closing statement, I'd like to end by agreeing very much with Daniel on this point. I think Daniel and I are very much in agreement that algorithms in the AEC space are not intended to replace designers but to work alongside them. I personally believe the sweet spot is a co-authoring process where we leverage generative design to solve VERY targeted problems while retaining the ability to override generatively designed solutions with our own.
Iterative Planning tool developed by EvolveLAB for TVS Design
Further, I don't think it should be all or nothing. As with any process, we should use the right tool for the right job: (gasp) AutoCAD when it's appropriate, Excel when it's appropriate, and generative design when it's appropriate. To come full circle with our first example: even though we used generative design to help rationalize the geometry for the Hobbs Trail project, we also used Revit, Rhino, paper, and pencil.
In closing, I would encourage all of us to use the right tools for the right job and to engage in respectful and kind debate. It is my opinion that iron sharpens iron. I'd like to thank Daniel for his post and for always selflessly contributing to our industry. I am indebted to him for helping me challenge my own beliefs and thoughts about our industry.
About the Author: Bill Allen is CEO and President of EvolveLAB, Disrupt Repeat, and On Point Scans, firms that synergistically help architects, engineers, and contractors optimize the built environment. He has over 15 years of experience managing building technology in the AEC industry for cutting-edge firms. Bill has been a keynote and featured speaker at multiple events, and he has the most-watched Autodesk University talk ever, "The Future of BIM is NOT BIM, And It's Coming Faster Than You Think". He also co-founded the non-profit The Bare Roots Foundation, an organization that believes every human being deserves basic needs including shelter, food, and clean drinking water.