Introducing Project Blue
The best leaders are those the people hardly know exist.
The next best is a leader who is loved and praised.
Next comes the one who is feared.
The worst one is the leader that is despised.
If you don't trust the people,
they will become untrustworthy.
The best leaders value their words, and use them sparingly.
When she has accomplished her task,
the people say, "Amazing:
we did it, all by ourselves!"
Being a senior software engineer is hard. You’re expected to contribute and have impact, but at the same time, much of what you’re expected to do sounds an awful lot like doing nothing. I don’t mean to say I do nothing, but how do you quantify indirect impact through other people except by assigning your words to their actions?
Pattern matching.
Repetition.
It’s about making a story that can be pointed to as stationary in time, repeatedly throughout time.
It’s about having a mark.
It’s about making that mark repeatedly.
And then teaching other people to make that mark.
Eventually it’s about teaching other people to teach other people to make that mark.
That’s where this book comes in.
I’ll be perfectly transparent: I don’t always know how to have impact; when I feel this way, I write for clarity. Tonight I recognize that writing is the best way to have impact. I usually write for me.
Not tonight.
I had an epiphany tonight that’s too good not to share: The best way I can have impact at work is not to enforce my ideas, but to take the awesome team I work with and encourage them to grow in ways that are amenable to long-term health.
It’s about identifying their marks.
It’s about teaching them to make their mark.
It’s about teaching them to do it repeatedly.
I’m a bit lucky here. When you’re blessed with good co-workers there’s not much heavy lifting to do. Instead, I’m left feeling imposter syndrome in the best of ways. I have a Ph.D. in distributed systems from an Ivy League university and fresh undergrads regularly blow me away with the insightfulness of seemingly innocuous or basic questions. The level at which people think these days is truly outstanding.
Either that or Chroma is a special place to work.
So why start on chapter 17 of the Tao Te Ching?
Leadership.
Meta-leadership.
A fixed point of leadership.
It’s about teaching others to identify marks.
It’s about teaching others to grow a mark.
In short, it starts a dialogue about leadership. Senior leaders need to heed the warnings here. Too often, people trying to be senior try to lead through force, by dictate and mandate. Instead, they should lead by supporting others, by being approachable, by embodying the values they want to instill in others.
Monkey see, monkey do.
It’s about putting out into the world that which you want to receive.
The universe has a way of bringing together like-minded people, if only because they’re the only people who can understand each other. Embodying the actions, thoughts, and ideas one wants to cultivate is the most surefire way to reinforce those learnings in others. It’s not the most direct way to get what one wants on a short time scale. There’s a training period in which one has to admit room for loss and adjustment.
To accelerate this process, the best senior engineers are the ones who know how to make others grow.
It’s about teaching a process.
It’s about forking that process.
17’
Paradoxically, the best way for a senior engineer to grow others is to do nothing. I don’t mean to sit on one’s ass and twiddle one’s thumbs, but to practice a type of non-interference—non-action—that encourages ownership and learning from a process. The process will grow and change with time, but what should remain constant is the senior engineer’s commitment to support the junior engineer through the process.
This means trusting junior engineers with tasks, but providing support as needed in a way that encourages them to adopt and adapt the process to their own work ethic.
The role of the senior engineer, then, is to do the work up-front; I don’t mean literally do the work, but to do enough work so that the next few conversations will have a predictable outcome that helps to grow the junior engineer into something more.
What does “do nothing” refer to, then?
Allowing the process to unfold.
In Art and Fear, David Bayles and Ted Orland recount an anecdote in which a pottery class experimentally showed that prioritizing quantity over quality yields both, while a focus on quality alone yields neither.
Process is like that, but in a non-linear, non-quantifiable way.
What does it boil down to?
Culture.
The best senior engineers recognize that their impact comes from what others do at their behest. A single person can only write so much code. A single person can inspire a revolution to write code.
Given the option between the two, which would you pick?
36
If you want something to return to the source,
you must first allow it to spread out.
If you want something to weaken,
you must first allow it to become strong.
If you want something to be removed,
you must first allow it to flourish.
If you want to possess something,
you must first give it away.
This is called the subtle understanding of how things are meant to be.
The soft and pliable overcomes the hard and inflexible.
Just as fish remain hidden in deep waters,
it is best to keep weapons out of sight.
It’s no secret that enlightenment’s a thing people chase after. Whether it’s real or not is up to your beliefs and how you define enlightenment. Deep peace with oneself? Probably. Telepathic communication with other enlightened beings? Probably not.
In between lies some truth.
I imagine this passage is about something akin to the passing of a torch from one generation to the next. From master to student, repeatedly; a braid across the generations growing ever thicker as more people weave into it.
Engineering’s like this, too.
The best engineers don’t go around bragging about how good they are. Instead, the best engineers cultivate curiosity and self-criticism—in themselves and others.
Great leaders are not necessarily the best engineers. They are able to identify the best engineers early and turn potential energy into kinetic energy by fostering an environment in which self-criticism and self-reflection can be practiced safely and experimentally.
Talent and gifts are not fungible.
The best recognize that and work to cultivate both the gifts and talent, and the overall culture of the place in which they find themselves.
Culture.
That’s the key. Culture should encourage humility and the ability to blend. Engineers who flaunt their gifts and talents—braggarts, mostly—won’t be as able to have impact as the engineers whom people want to talk to. If you want to spread an idea, you do best to share it early, achieve buy-in, and get others excited.
This is where it’s important to understand one’s own strengths and weaknesses. For example, someone with a strong voice may find themselves unable to communicate with others effectively. What sounds internally like a helpful offering of knowledge sounds externally like telling others how they ought to do things.
Learning to temper one’s voice is a form of learning to have subtle influence. Pointing, mostly direct pointing, is the way. Subtle influence instead of direct persuasion. Compassionate understanding instead of forceful argumentation.
Intellect is the senior leader’s weapon, but intellect is intimidating. By one estimate, fifty-four percent of American adults read at or below a sixth-grade level. And they can be brutal to intelligent people. Raw application of intellect is unlikely to yield results because it is too forceful.
Instead, the senior leader’s methodology should be to lead through example—to lead the charge to do new things or do things in a new way. To place the idea in the right person’s mind. This requires quite a bit of care and self-knowledge to execute effectively. It requires being able to understand others’ talent and thought processes from the outside sufficiently well that you can ensure the idea will begin to take root inside.
Testing Reality
Multiple chapters of the Tao Te Ching reference exercise of the body and mind as keys to longevity.
Exercise—or a metaphor for it—is crucial to software engineering. Software testing should be the bare minimum we expect.
I argue for much stronger: Continual, positive signal that things are working exactly as intended. This means that there is a positive construction in reality that mirrors the reality of the system under observation.
In many spiritual traditions, reality is expressed as an illusion. Something you cannot actually see, touch, feel, or otherwise encounter except through limited sense organs—things that aren’t actually reality. Psychology is full of cases where senses contradict something objectively measurable or otherwise induce chaos.
In software engineering, the executing program is reality. Only viewable through instruments that we attach to it. Except we control it. Completely. And we can make it tell us things.
Yet we treat it the way we treat and approach the reality masked by our sense organs.
To be reliable, software doesn’t just need to work, it needs to give the positive appearance of working to all who touch it. I don’t just mean a spinner working its way through an animation, I mean that the system finds some way to externalize to users that it is doing what they expect. For the average engineer, this may mean having log lines that say, “I did something.” This should be the bare minimum we expect.
I argue for much stronger: Continual, positive signal that the construction in reality matches the fiction (its representation in our sensory experience). This means that there is some way the user can compare the actions and state of the system against some reference.
For a database, this reference may be a cryptographic checksum of the database’s contents.
For an email application, this reference may be a complete history of every email ever sent.
For a text editor, this means displaying the text exactly as the user expects it.
In all scenarios, the fiction not only approximates reality, but matches it to the limits of reasonable precision such that 99.999% of users will never spot a bug, and those observant enough to spot the bug would be reasonably intellectually equipped to remove the bug. That’s one in one hundred thousand people. A handful of people in each city in the world.
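To make the database example concrete, here’s a minimal sketch of what a checksum reference might look like. Everything in it is hypothetical: a toy store and a toy digest scheme, not any real system’s design.

```python
import hashlib

class ChecksummedStore:
    """A toy key-value store that exposes a digest of its full contents.

    The digest is the store's "reference": any client can recompute it
    independently and compare, giving a continual positive signal.
    """

    def __init__(self):
        self._data = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def digest(self) -> str:
        # Hash entries in sorted order so the digest is deterministic.
        h = hashlib.sha256()
        for key in sorted(self._data):
            h.update(f"{key}={self._data[key]};".encode())
        return h.hexdigest()

def expected_digest(expected: dict) -> str:
    # The user's independent reconstruction of the same reference.
    h = hashlib.sha256()
    for key in sorted(expected):
        h.update(f"{key}={expected[key]};".encode())
    return h.hexdigest()

store = ChecksummedStore()
store.put("a", "1")
store.put("b", "2")
# Positive signal: the fiction (our model) matches reality (the store).
assert store.digest() == expected_digest({"a": "1", "b": "2"})
```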
Software should be held to this standard.
Software developers should hold themselves to this standard.
Because it’s easier.
It’s easier to construct an instrumentable system, an instrument-capable one, and prove that it works when you can maintain a positive signal like this at all times. The signal gives you the confidence to make broad changes and know that nothing fundamental is breaking.
Transient Extremes
It’s easier to test your system at two orders of magnitude more than you promise to customers because that buys you headroom to think about your next step. And it’s easier to buy yourself two orders of magnitude at the design phase than after implementing the first idea that came to mind.
Software, therefore, should be continually tested in environments that persistently present transient extremes. That’s what real software sees during incidents, so a test environment that can present them will uncover more problems than the production workload will.
Too many people want their testing workload to mimic their production workload. With a good architecture, the production system tests and continually verifies the production workload. Testing is for uncovering new failure modes. Testing, therefore, should present a fiction of the production workload—an amplification, but a fiction.
The technique pays off when every component gets rigorously subjected to every imaginable extreme at the design phase. The resulting system will cost less overall to take to many nines, if it can be taken there. Imagination and vision are cheaper than experimentation and elbow grease.
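As a sketch of what a transient-extreme test could look like, suppose we promise customers 100 requests per second; the handler, the amplification factor, and the burst shape below are all invented for illustration.

```python
import random
import time

PROMISED_RPS = 100    # the rate we promise customers
AMPLIFICATION = 100   # two orders of magnitude of headroom

def handle(request_id: int) -> int:
    # Stand-in for the system under test.
    return request_id * 2

def transient_extreme_load(duration_s: float = 1.0) -> None:
    """Drive the system in bursts far beyond the promised rate.

    Incident traffic is bursty, not smooth, so the test alternates
    between lulls and spikes at AMPLIFICATION times the promise.
    """
    deadline = time.monotonic() + duration_s
    sent = 0
    while time.monotonic() < deadline:
        burst = random.randint(1, PROMISED_RPS * AMPLIFICATION)
        for i in range(burst):
            assert handle(sent + i) == (sent + i) * 2
        sent += burst
        time.sleep(random.uniform(0.0, 0.05))  # transient lull between spikes
    print(f"survived {sent} requests in {duration_s:.1f}s")

transient_extreme_load()
```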
That’s the problem with my approach, and I’ll be honest about it: Vision and execution rarely go hand in hand, and my track record favors vision. Society favors those with vision and execution, hailing them as leaders. But the people with execution often lack vision, chasing someone else’s vision; I don’t mean to belittle people of execution here, but every counterexample you can come up with is definitionally chasing their own vision or their own tail.
On the other hand, to have vision and lack execution is probably worse: It means being confined to the realm of words, of phantoms. People put faith in a good demo; they rarely put faith in words—especially in this era of ChatGPT. People just don’t listen to words anymore because words—even carefully thought-out words—are no longer meaningful to people.
This is all a hypothesis. I’m calling my shot on this one.
Solving the Right Problem
I believe in writing tightly-scoped, single-purpose tools that solve the right problem.
Build the tool right. Once. Then leave it.
Or acknowledge that it will be deleted some day and that’s OK.
There’s no substitute for having a tool that can be built upon and become the bedrock for other tools.
That requires delivering an atomic unit of value completely.
And solving the right problem.
Note that I am not arguing for an agile philosophy, necessarily. Agile would say that you compromise the output. Instead, I’m suggesting you compromise on anything else to get quality, for quality—and by proxy, craft—is the single biggest differentiator between great software and the best software.
When laying out a software system, focus on quality as its differentiator. Compromise on what it does to get quality. Do not compromise on quality to do more. At some point you’re not doing more, you’re doing something different, of sufficiently degraded quality that you should not say you’re doing the same thing.
And that’s the problem with software.
With words, really.
Words distract and become muddled.
Distinct concepts become merged even when there’s a technical, rigid, subtle, and important distinction between them.
Blurring together related, but not totally isomorphic, problems is what leads to a degradation of quality. It’s not good enough to say, “That’s almost the same.” Computers operate in binary; so should the way we evaluate their quality.
Thus far I’ve evaded a discussion of quality because quality is about the fitness between the problem being solved and the actual problem to be solved. An impedance mismatch of sorts; or, in a quality system, the lack thereof. It’s about minimizing frictions between reality and the fiction that gets constructed for observing reality.
Maintenance and Growth
Achieving quality in software is no easy feat. I believe that the key to quality is to explicitly have a plan for what is “good enough,” at which point the software gets set on a course toward death. It may live forever, but it gets set on a collision course with death.
I believe that growth should come in the form of a new tool rather than a change to an existing tool.
Taking this to the extreme, software should be designed in a way that new features come from new code, not from changing existing code.
I believe the key to doing this is to make a choice early on, and it is a choice: To complete the project in a way that’s truly complete with quality, or to cut scope until doing so is possible. To be clear, I’m not laying out a choice, but an algorithm I choose to follow.
Building each atomic unit of functionality as a complete, quality tool ensures that quality can stack on top. In software, errors compound, and the pursuit of quality is all about exponential gains and losses.
Quality compounds too.
Quality and maintenance go hand in hand: Software of high quality maintains itself. I don’t mean that metaphorically or indirectly, I mean to say that we should be building software so that it can maintain itself. This means code that’s self-aware, but not sentient or conscious. It’s code that knows it’s code written for and run by humans, so more thought has gone into how the user reads the fiction it presents than into the reality it executes.
That’s the key to quality: Down-scope the fiction to something achievable with quality with the resources available, and only then design the system that powers it.
Too many people design the system’s behavior and think about monitoring it as an afterthought. If the system has baked into it the notion of progress, then it can monitor its own progress and—eventually, agentically—begin to take action in response to not making progress.
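Here’s a minimal sketch of what baking progress into a system might look like, assuming the simplest possible notion of progress (a monotonically increasing counter) and a stubbed-out remediation; a real system might restart a worker or shed load instead.

```python
import time

class ProgressMonitor:
    """Tracks a progress counter and reacts when it stalls."""

    def __init__(self, stall_after_s: float = 2.0):
        self.count = 0
        self._last_count = 0
        self._last_change = time.monotonic()
        self._stall_after = stall_after_s

    def advance(self) -> None:
        # The system calls this every time it makes real progress.
        self.count += 1

    def check(self) -> None:
        # The system checks on itself; nobody external is required.
        now = time.monotonic()
        if self.count > self._last_count:
            self._last_count, self._last_change = self.count, now
        elif now - self._last_change > self._stall_after:
            self.remediate()

    def remediate(self) -> None:
        # Stub: a real system might restart a worker or page a human.
        print("no progress observed; taking corrective action")

monitor = ProgressMonitor()
monitor.advance()  # work happens...
monitor.check()    # ...and the system verifies it is still moving
```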
The more behavior a system exhibits, the less possible it becomes to make it self-maintaining. I don’t mean to say that it’s hard; I mean to say it becomes less possible—eventually impossible. Describe a Swiss Army knife in one hand motion. Now describe a hammer.
Negative Spaces
Mentorship, like striking art, is as much about what’s not there as what is there. What’s there shapes, but what’s not there gives room for growth.
A good leader spots areas for growth and constructs scenarios that will naturally allow that growth. Proper growth cannot be forced, for doing so encourages growth along the wrong axis. Further, growth is highly personal and no one individual should presume to know the real growth areas of another. Hidden disabilities, struggles, and biases—all innate properties of what it means to be human—abound.
Like the fiction of our systems, the perspective we have of ourselves from the inside is very different from the perspective we have of other people. We see our own thought processes, our own rationales, and our every waking moment. For that reason it is easy to forgive ourselves and very hard to understand others.
A good mentor will tear down their own viewpoint first to arrive at the world of possibilities in front of their junior. And then, in an effort to reconnect with youth and fresh ideas, ask questions.
What do you see?
What do you not see?
What do you expect to see?
What do you expect to see, but don’t see?
Four questions that cover all blind spots. Every conversation should be motivated by variants on these four questions.
If you want to be known for asking truly insightful questions, it’s easy: Most people will ask the first smart question that comes to mind. And they usually look the worse for it.
The best questions result from a personal inquiry that arrives at a contradiction so strong that the person asking the question has exhausted the limits of their imagination seeking an answer.
I had an advisor in grad school who told stories of his advisor, who would pick hard questions the student couldn’t answer and then, when the student presented a solution, flip-flop on what he expected from the answer so that the student was thrown off course. He did this until the student no longer sought his approval.
This is bad mentorship arising from pattern matching: good mentorship breaks people down, yes, but bad mentors break people, while good mentors make people learn to break themselves.
The Verifier Zoo
I have a personal engineering philosophy that systems—especially production systems—should be broken into two components: one half that does the work and one half that verifies the work. One half rooted in the system’s reality, one half rooted in the operator’s reality. The verifier should be consuming the fiction presented by the system—dashboards and logs—and recreating the activity of the system in a way that cross-correlates to the maximum extent possible. This is how you write bug-free software with confidence.
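Here’s a toy sketch of that split: the system emits a structured log (its fiction), and a verifier consumes only the log to rebuild and cross-check the system’s state. The log format and functions are invented, not any real interface.

```python
# The doer: applies operations and emits the fiction (a structured log).
def apply_ops(ops):
    state, log = {}, []
    for key, value in ops:
        state[key] = value
        log.append(("put", key, value))
    # The system's claim about its own final state goes into the fiction.
    log.append(("final_state", tuple(sorted(state.items()))))
    return state, log

# The verifier: consumes only the fiction, recomputes, and cross-checks.
def verify(log):
    rebuilt, claimed = {}, None
    for entry in log:
        if entry[0] == "put":
            _, key, value = entry
            rebuilt[key] = value
        elif entry[0] == "final_state":
            claimed = entry[1]
    return claimed == tuple(sorted(rebuilt.items()))

state, log = apply_ops([("a", 1), ("b", 2), ("a", 3)])
assert verify(log)  # positive signal: fiction and reality agree
```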
There’s a hidden benefit to this: If the logic of the system is embedded within the verifier to recompute the effects of the system—something that I believe will happen naturally—then that logic will inherently be testable, at scale, in a way that only observes. Different, independent versions of the same software can observe the production system, and so long as they agree on the state of the underlying system being observed, we can say that the system adheres to all verifiable specifications.
This is important to fully understand because it has non-obvious potential.
I imagine that people will, at minimum, structure verifiers to sample production data and replay it against logic as if it were a unit test—the production system gives a verifiable input and output that we can feed into an assert within the test. Forwards and backwards compatibility at all times, letting you know just how far you can, or have to, roll back during an incident.
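A sketch of that replay pattern, assuming production records verifiable (input, output) pairs somewhere a test can read them; the record format and the `logic` function are made up.

```python
import json

# Records sampled from production: verifiable (input, output) pairs.
SAMPLED = [
    '{"input": 2, "output": 4}',
    '{"input": 7, "output": 49}',
    '{"input": 30, "output": 900}',
]

def logic(x: int) -> int:
    # The candidate version of the system's core, about to ship.
    return x * x

def test_replay_production_samples():
    # Yesterday's real outputs become asserts against today's new code.
    for line in SAMPLED:
        record = json.loads(line)
        assert logic(record["input"]) == record["output"]

test_replay_production_samples()
```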
A form of continuous integration, continuous deployment.
That’s the minimum we as operators should demand from our systems.
I want to go a step further.
I heard a story once that the early versions of GNU were innovative not because the authors were having fun but because they needed to solve a real problem: They needed UNIX compatibility without allowing anyone the possibility of a copyright-infringement lawsuit. The solution was to innovate. Where others were memory constrained, GNU would use a memory-rich algorithm that was optimized for CPU. Where one data structure was in use, another, differently characterized data structure would be used. The list of ways to be clever is endless and lost to time, or to my inability to come up with the right magic phrase for Google.
I imagine that writing a novel verifier is an onboarding project and that teams maintain a leaderboard of the number of bugs caught by each verifier. A competition among the team members to find bugs before they can hit production. Or soon after they hit production, as will inevitably, eventually be the norm. I say that with the best of intentions. Once all the low-hanging fruit is gone, the only bugs left to uncover will show themselves only at scale. And scale requires resources; why waste them on simulating something at scale when you can use those resources elsewhere as replication and backup of the production resources?
Only once you have a verifier zoo should you set up CI/CD to let you roll to production automatically, day or night, with confidence.
Choice
Choice, more specifically the absence of choice, improves reliability.
In the realm of software behavior, a piece of software with only one defined path from input to output tests itself on every successful execution. A piece of software with conditional behavior exhibits exponentially more choice with each condition, which means it’s likely some cases will never manifest until a black swan event.
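To make the arithmetic concrete: n independent conditions yield 2^n distinct paths, and covering one path says nothing about its neighbors. A toy count:

```python
from itertools import product

def path_count(n_conditions: int) -> int:
    # Each independent condition doubles the paths through the code.
    return 2 ** n_conditions

# Ten innocuous flags already hide over a thousand combinations,
# most of which no test (and no production traffic) has ever exercised.
assert path_count(10) == 1024
assert len(list(product([False, True], repeat=10))) == 1024
```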
A black swan event is one of the few times when you would never launch new code to production without care—and activating a new, untested code path is almost certainly launching new code to production without care.
So what can you do?
I have seen people mirror production traffic to a separate cluster. This strikes me as a liability and compliance nightmare.
I have seen people use feature flags in the code. This strikes me as an exponentially growing pile of technical debt.
In all cases, the complexity comes from splitting information into two pieces in such a way that neither piece is the authority.
Instead, we should structure our systems so that production and the fiction it presents are both the authority, so they can be cross-checked. Forking customer data? Fine, but do so in a way that the fork and the upstream can be compared and reconciled. Both are then the authority, and it’s not so much a fork as a backup. The downstream processing will diverge, but this presents yet another opportunity: If you’re going to go so far as to set up a shadow of production, treat it as a reference against which the data in production is compared. This allows the shadow production to ship code and confirm it doesn’t change anything. When shadow production does change, the changes can be audited before rolling out.
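A minimal sketch of treating the shadow as a reference, with invented pipelines standing in for real ones; the point is that divergences are audited, not silently tolerated.

```python
def production_pipeline(records):
    # Reality: the code customers actually see.
    return [r.strip().lower() for r in records]

def shadow_pipeline(records):
    # The shadow: candidate code, same inputs, treated as a reference.
    return [r.strip().lower() for r in records]

def reconcile(upstream, fork):
    """Compare the fork against upstream so both remain authoritative.

    A divergence is not an error by definition; it is a change to be
    audited before the shadow's code is allowed to roll out.
    """
    return [(i, a, b) for i, (a, b) in enumerate(zip(upstream, fork)) if a != b]

records = ["  Alpha ", "BETA", "gamma "]
divergences = reconcile(production_pipeline(records), shadow_pipeline(records))
assert divergences == []  # positive signal: the shadow changes nothing
```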
Let’s deconstruct what people are trying to get out of shadow traffic: They want real-world assurances that data they will see will pass through their system without tripping any of the alarms or failure conditions they’ve programmed the system to expect. Usually so that they can let a piece of software “bake” in production without causing problems. This is the bare minimum we should expect.
I advocate for something much stronger: Systems should be built with the intent to allow cross-version verification. One such example would be the verifier zoo’s disparate implementations, but you could imagine exposing a sample of some production data—in-house customers, namely—to CI via a first-class primitive. Imagine being able to say that X% of RPC traffic over the last day successfully passes through CI with the new code.
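A sketch of what such a first-class primitive could look like in CI; the gate, the threshold, and the sample format are all assumptions, not an existing tool.

```python
def ci_traffic_gate(samples, new_code, threshold=0.999):
    """Hypothetical CI primitive: replay sampled RPC traffic through new code.

    Reports what fraction of yesterday's real traffic the new code
    handles identically, and gates the merge on a threshold.
    """
    passed = sum(1 for rpc_in, rpc_out in samples if new_code(rpc_in) == rpc_out)
    rate = passed / len(samples)
    print(f"{rate:.1%} of sampled production traffic passes with the new code")
    return rate >= threshold

# Sampled (input, recorded output) pairs from the last day of traffic.
samples = [(1, 2), (2, 4), (3, 6)]
assert ci_traffic_gate(samples, lambda x: x * 2)
```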
That’s what people seem to want with testing production in a separate cluster.
Feature Flagging
Feature flags are like democracy: Terrible, but better than everything else to come before. Once upon a time I worked with someone who taught that the right way to feature flag was to add the flag in such a way that the difference between the feature-flagged code and the code without feature flags was the presence of a block saying, “If the feature is not enabled, then return an explicit error.” Otherwise, outside the block, execute the feature-flagged code.
This approach has its merits: The code enforces the idea to callers that the feature may be unavailable and the only way to know is to call into it and handle an error. In fact, it can use the same error handling as when the server is physically unavailable. This enforces good engineering discipline, and makes it so that a feature may always be marked as unavailable—-the feature flag may be left around as a kill switch. Thus, the same code that handles feature unavailability due to failure handles feature unavailability by fiat. Removing the feature flag requires removing just three lines with no other changes to the code. More invasive changes than that run the risk of an errant change making its way to production.
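A minimal sketch of the pattern, with a hypothetical flag store and feature (raising an exception here stands in for returning an explicit error, as is idiomatic in Python). Removing the flag means deleting only the guard at the top of the feature; nothing else changes.

```python
class FeatureUnavailable(Exception):
    """Raised when a feature is disabled; handled like any partial failure."""

FLAGS = {"smart_ranking": True}  # hypothetical flag store

def smart_ranking(items):
    if not FLAGS.get("smart_ranking"):
        raise FeatureUnavailable("smart_ranking")
    # The feature itself lives outside the guard, unchanged by flagging.
    return sorted(items, key=len)

def render(items):
    try:
        return smart_ranking(items)
    except FeatureUnavailable:
        # Same handling as if the ranking service were unreachable.
        return items

print(render(["banana", "fig", "apple"]))
```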
By making this one choice to have features be additive—be killable at all times—we buy ourselves something in our architecture: Monolith or micro-service, the feature is decoupled from its user. When the feature is available, the caller can use its output. When the feature is unavailable—whatever the reason—the caller handles it as a partial failure.
This requires a product skill most people empirically lack: Finding ways to mask partial degradation in the experience of a product.
People who master this skill transcend the monolith versus micro-service debate and gain the ability to truly distribute ownership without compromising resiliency.
Second Fiction
There’s a second fiction about our systems—the one that sits in our mind and interposes on our thoughts about the system we can observe. The brain doesn’t take in raw signals, but instead puts forth a hypothesis that it seeks to confirm or disprove. Take that in for a second—it’s not observing reality, but repeatedly generating fiction until something passes the test for being highly likely to be real. It can be wrong.
Therefore, great engineers remember the system, but the best engineers remember how to approach the system. When there’s a bug, the system is inherently producing a positive example. Some engineers—especially junior ones—will stare at a new bug in disbelief. “That cannot happen,” they exclaim. But it is happening and we have proof of it; why is such a reaction typical in people?
This is the second fiction of systems—I think of it as a phantom or ghost.
A good debugger sees through the phantoms and ghosts.
There’s only one way to achieve this: Self-knowledge through the practice of craft in a safe space that encourages growth.
A good debugger knows what it is they are looking at—what they see. They know that when they see an error message it could mean any number of things; they admit the biases of their past to help guide the search for meaning in this moment, but keep an open mind as to the true meaning yet to be discovered. They see exactly what’s there and nothing more.
Or, as I’ve found, they see more and know what questions to ask to prune away hallucinations of the mind. Mathematicians call them assumptions. It would be folly to throw away such things—-each one is a reaction of the brain to associative inputs. Each one is a complete thought about how the system should work, and if you can identify the core truth being asserted you either rapidly discover a divergence between reality and your mental model or take one step closer without any divergence. Both are useful steps forward.
A good debugger simultaneously considers three variants of the system: The “real” thing, the fiction it presents in reality, and the fiction in their mind. A good debugger can rapidly switch perspectives from one fiction to the other in order to make assertions about all three.
In short, a good debugger debugs themselves first, the system second.
The number of rules in play matters
Ever play one of those board games that takes as long to understand the rules as it would to play, if the play didn’t take 10x longer for having to repeatedly re-consult the rules?
Some people build systems like they’re building board games. They add rules and modules and components and whiz-bangs incrementally, ending up with a system that’s a rat’s nest of complex emergent behaviors, most of which will not be intended by the author.
Instead, we should seek to make choices in our systems that stem from consistent philosophies. If, in building a key-value store, you recursively use the key-value store’s interface to implement the store itself, you should always decide to re-use the key-value store unless there is an overriding good reason not to. This is a cousin to the end-to-end principle, which generally states that you should not put in the middle that which you must do at the ends anyway, unless for performance[^1]. In this case, our principle is that we should not make a new choice where an existing choice will do, unless for an overriding reason so compelling that it is worth making a new choice.
[^1] J. H. Saltzer, D. P. Reed, and D. D. Clark. 1984. End-to-end arguments in system design. ACM Transactions on Computer Systems 2, 4 (November 1984), 277–288. https://doi.org/10.1145/357401.357402
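To illustrate the key-value example, here’s a toy sketch in which the store’s bookkeeping reuses the store’s own storage instead of introducing a second mechanism; the reserved prefix is an invented convention.

```python
class KVStore:
    """A toy key-value store whose metadata lives in the store itself."""

    def __init__(self):
        self._data = {}

    def _raw_put(self, key: str, value: str) -> None:
        self._data[key] = value

    def put(self, key: str, value: str) -> None:
        self._raw_put(key, value)
        # Bookkeeping reuses the same storage under a reserved prefix:
        # one choice (the key-value interface), reused, not a new one.
        count = sum(1 for k in self._data if not k.startswith("__meta__/"))
        self._raw_put("__meta__/count", str(count))

    def get(self, key: str) -> str:
        return self._data[key]

store = KVStore()
store.put("a", "1")
store.put("b", "2")
assert store.get("__meta__/count") == "2"
```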
The principle sounds tautological, but there’s a very simple rule at play: The first choice you make is to make compounding, binding choices. When you make choices you restrict what’s possible in the future, so general choices should be rare; however, specialized choices—those that matter in a very specific context—lead to fragmentation when they diverge and harden into general choices when repeated.
You’re always making a choice that impacts the future, so make choices that enable behavior rather than restrict behavior. This is something that cannot be written about adequately; you’ll know it when you see it. That won’t stop me from trying to write about it, however: Consider a physical analogy in which the manager of a construction site mandates a certain screw head for all screws on site. At first glance, this restricts behavior, giving less choice in the screw to use, but for that restriction, we enable workers to buy one tool and use it on screws across the entire construction site. The choice that enables always comes with a restriction.
Making the right choices is something even master craftsmen get wrong often enough that to write about it would be to codify more errors in the reader’s mind than could possibly be corrected.
Instead of saying what the right choices are, I will assert, drawing from my experience prototyping systems for decades now, that there is a correct order in which to make choices that dramatically cuts the work to be done. Some choices naturally inform other choices, or even better, make them irrelevant. Making those choices first can help to down-scope a project’s implementation without sacrificing its vision. And making a series of smart choices can enable a small implementation to do 90% of what a big implementation does in orders of magnitude less code. Less code is a proxy for less work to be done.
But less code is also a proxy for fewer bugs.
Fewer bugs also implies less work to be done.
A positive feedback loop.
Care during the planning phase pays dividends during implementation which continues to pay dividends during maintenance.
Thinking Hard
I believe every system should get built three times, which is more than people realize: Once in second-fiction, once in reality, and once in the first-fiction that connects them; that is, once in the architect’s mind, once in software, and once via a monitoring platform that demonstrates the two work the same.
Where to allocate strategy is up for debate, but there are some obvious choices to be made. If you allocate all strategy to the second-fiction, nothing gets done. If you allocate all strategy to reality, you will find yourself with a mess you cannot control. But you have to allocate some to reality in order to have impact, so let’s allocate our first piece of strategy-pie to the real system. Where does the remaining pie go? Balance. From the perspective of its vacuous dashboard, a system with no monitoring produces no observable change in the world, only externalities.
There needs to be some investment in both fictions; one as a reference point, one as a comparator. Without one, there’s nothing to say what’s “correct” on the dashboard. Without the other, a dog’s left chasing a Ferrari, without any hope of understanding where it’s going or what it’s doing.
This is an easy problem. The mind is trained to think, but you have to know how to work it to get results. Just sitting down and staring at a problem is unlikely to achieve an answer.
To truly be a hard thinker, you need to have a process for internalizing a problem so completely that it becomes possible and likely that dreams and flashes of the problem show up throughout the day—signs the brain is actively working on solving it.
It’s for this reason that some individuals need a deep pipeline. They are able to work on things passively and in volume because their brain is literally wired to jump around. They pick up the task at hand and work on that—and it has to be a non-thinking task. Then that task becomes the lens through which the backlog is viewed. New ideas, new connections, new thoughts form.
Then, there needs to be a net for capturing thoughts. Most thoughts aren’t worth thinking twice. Most thoughts will happen to me twice, and I can chase the echo. Thus I can sometimes lose a conversation to chase a thought, because the echo lags by many seconds. I watch my every thought to the extent that such a thing is possible. I can’t see my heartbeat or anything—I’m not superhuman—but I can see most emotions and feelings and thoughts stand out as distinct entities in my brain, entities that are not me, but also do not afflict me.
This is my process.
There are many like it, but this one is mine.
Consistent, internal discipline.
Leadership vs. The Fictions
We started this journey with leadership.
Then we looked at principles of system design.
A good leader knows a third fiction: Groupthink. Herd mentality. The box in which we all think, and which we must escape, because escaping it is table stakes.
Life is true table stakes.
In order to survive, we need to come together to support each other.
A good leader knows how to hear complaints without encouraging them.
To offer support without building demand for support.
To get everyone to give that which they give freely, in ways that create a support network for everyone.
Doing this is something I’ll admit I’m still learning to do in practice with other people, but I’ve mastered the first two for myself.
What I’ve learned in life thus far is that the people best equipped to take care of a team or a group are those who have cared for themselves and loved ones first. Not just in the best of times, but in the worst of times; through hardship. Truly life-altering events someone might not think they could survive until they got to the other side and reflected back.
I’ve been there.
The above is the start of a book I’m writing about using the techniques I’ve specifically outlined to build a system capable of keeping an autistic engineer with schizophrenia on the rails and trending upwards in life.
I’m specifically implementing the techniques as a SaaS platform whose core is open-source because I need community. This post is the start of that community-building activity. In a more “creative” mode, I put together and narrated this slide deck about the project I call Blue, which will serve as the foundation of the platform I’m building.
Blue’s still in trouble, but I have the direction now. I’m almost done implementing the initial version. It’s been slow going. I took a week of vacation to try finishing it only to realize I was off-course. That’s corrected now.
What I need, and why I’m writing this post and giving away a quarter of my book early, is to network with people in the Bay Area. I see things I cannot articulate and just need help learning to articulate them.
The autistic engineer with schizophrenia I’m desperately trying to keep on the rails is the same one doing the maintenance.
That’s where AI agents come in. I have a vision of agents that I’m laying out for the next wave of Blue and I want to try something different: Talking to people openly about what I’m working on as I work on it rather than struggling for years to produce visions that are locked behind a mental paywall.
I need to finish Blue’s first wave first, though. For that, I’m looking to network with people.
If you’ve read this far, that’s you.