Two Essential Ecosystems In Our Country Vastly Misunderstand Each Other — At a Terrible Price

Why the Gap Between Silicon Valley and Washington D.C. Must Be Bridged, and How To Do So Effectively

Carolyn Wang
7 min read · Jun 24, 2024
Capitol Hill // Photo by Carolyn Wang

This summer was the very first time I truly “stepped out” of Silicon Valley, landing in the nation’s capital. Never before have I felt so strongly about the need to straddle two worlds: (1) Silicon Valley and (2) Washington D.C.

Now, to preface: yes, technically, I’ve physically “stepped out of Silicon Valley” before. But I was born in the Bay Area, raised in the Bay Area, and now attend college in … you guessed it. The Bay Area. My parents work in tech or tech-adjacent careers, my friends’ parents work in tech or tech-adjacent careers, a vast majority of my friends aspire to work in tech or tech-adjacent careers, and I’m pursuing a Computer Science major purely out of personal interest … you get the picture. For my entire life, I grew up mere minutes away from the biggest tech giants in the world. Yet after a lifelong love of writing, a four-year stint in high school journalism, a happy year in AP Government & Politics, a noteworthy Atlantic article, and a wonderful experience running a tech policy fellowship, I decided to take a chance and come out to D.C. after my first year of college.

I’m currently lucky enough to work under two brilliant mentors at The Brookings Institution, both savvy in the technology itself and in the discourse around it. Brookings is one of the most prominent and well-respected think tanks in the world on many fronts, including AI governance and innovation. Yet despite this bubble of wildly smart and inspiring individuals I’ve been blessed to work with, I’ve observed a broader culture of technological alarmism across D.C., one that raises red flags about something I had assumed Washington already had covered: how to adapt quickly to the “AI hype wave” that has overtaken much of the country.

And I think I’ve discovered the fundamental issue: the disconnect between Silicon Valley and Washington D.C. is devolving into a rat race of fear-mongering and vague legislative proclamations that is, at its core, counterproductive for both sides of the country.

Part of the work I’m doing at Brookings AIET this summer is investigating AI’s role in elections and legislation through both qualitative and quantitative means, which includes hours of reading through roadmaps, memos, trackers, and bills, alongside some coding and number-crunching. What I’ve realized, amidst briefs, memos, and talks titled “AI + XYZ,” is that there’s a whole lot of fear-mongering and black-box abstraction. What, even, is “AI”? It’s a broad term that dates back to the 1950s and covers vast fields wholly unrelated to what many bills, executive orders, and talks featuring non-technical C-level executives have referenced. Is the discourse really about “AI,” or about one very specific machine learning model being deployed, say an unsupervised system that, at its core, sits in a small subfield of the vast umbrella computer scientists call “AI”? And what good does it do to flag the potential for algorithmic bias and unfairness in abstract legislative terms if policymakers and elected officials don’t understand what it means to tune hyperparameters, train models, or operate on garbage-in, garbage-out principles?
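For readers outside the field, those last phrases are mundane, concrete realities of everyday machine learning work, not mystical properties of “AI.” Here is a minimal, purely illustrative sketch (my own toy example using scikit-learn, not anything drawn from a bill or roadmap) of what “tuning a hyperparameter” and “garbage in, garbage out” actually look like:

```python
# Toy illustration (not from any bill or roadmap): what "hyperparameters"
# and "garbage in, garbage out" mean in everyday machine learning work.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small synthetic classification dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth is a hyperparameter: a knob the engineer sets before training,
# not something the model learns. Different settings give different models.
for depth in (1, 3, 10):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    print(f"max_depth={depth}: test accuracy = {model.score(X_test, y_test):.2f}")

# Garbage in, garbage out: train the same model on shuffled (meaningless)
# labels and its predictions are no better than a coin flip.
garbage_labels = np.random.default_rng(0).permutation(y_train)
junk_model = DecisionTreeClassifier(max_depth=3, random_state=0)
junk_model.fit(X_train, garbage_labels)
print(f"trained on shuffled labels: test accuracy = {junk_model.score(X_test, y_test):.2f}")
```

A hyperparameter is just a knob an engineer turns before training, and a model fed meaningless data produces meaningless predictions. None of that nuance is captured by a sentence that simply says “regulate AI.”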

One specific example of this disconnect, which I analyzed very recently for a briefing, is the AI roadmap released this May by the Bipartisan Senate AI Working Group, led by Senate Majority Leader Chuck Schumer and the rest of the “Gang of Four.” As nicely worded and intricately articulated as the roadmap was, the document’s ultimate purpose was confusing at best.

I don’t critique out of malice, of course. But I do question why, after all those insight forums, all those recommendations from various stakeholders, and all that time spent drafting the roadmap, everything in its pages still sounded so … abstract. Which was probably why organizations like TechNet (a network of tech CEOs) and the Chamber of Progress (a trade group representing tech companies on antitrust, self-driving cars, and other issues) endorsed it so vigorously.

Replace “AI” and all the tech-specific organizations in the roadmap with any other “XYZ” after its action verbs (“Fund XYZ,” “Develop legislation surrounding XYZ,” “Create regulations around XYZ,” “Consult workers about XYZ”), and the roadmap would still make sense.

In other words, there was no concrete specificity. The “Gang of Four’s” document didn’t amount to any tangible impact.

And I don’t blame them.

It’s simply hard to create effective, complementary policy that guides tech in a direction that is responsible yet still innovative, especially in the newest generative AI space, when policymakers are tapping the glass from the outside, passively observing and reacting to systems they don’t fully understand.

Meanwhile, a large portion of the individuals in Silicon Valley who guide and direct that technical research, or spur innovation in industry, are so knee-deep in specific domains and so high up technically that they are either unaware of what’s happening legislatively, indifferent to it, or simply too busy to care about what’s happening on the other side of the country and why federal, state, and local legislation might be relevant to the broader population. The other explanation, of course, is that higher-ups in Silicon Valley know exactly what’s going on in Washington and aren’t lifting a finger, because bad legislation is the best avenue toward self-regulation.

Having grown up in the Bay Area and now stepping out of it, I’ve realized that I’ve taken a lot of technological awareness for granted. A good majority of the people I know code, whether they are CS majors or not; it’s a staple of education there, starting as young as elementary or middle school, and the broader environment reinforces it. Data science techniques are integrated into ecosystems across industries. The three majors housed in UC Berkeley’s newest college, the College of Computing, Data Science, and Society (Computer Science, Data Science, and Statistics), are among the most popular at Cal, the alma mater of Apple co-founder Steve Wozniak and home to a CS program ranked #1 in a tie with Carnegie Mellon, Stanford, and MIT.

It’s not just words, long documents, roadmaps, memos, and passionate, persuasive speeches that hold power in Silicon Valley. Numbers, data, quantitative smarts, sparks of imagination and ideas, technical competence, and perhaps the occasional LeetCode grind: that’s what holds power and respect in the tech industry. And that’s what legislators don’t see.

One question I’ve asked myself constantly throughout my time in D.C. is: do I want to be a policymaker who cares deeply about tech, or a technologist who cares deeply about policy? From what I’ve observed so far, the latter seems like the better option. It’s much easier to start from the inside and dig out than to shovel haphazardly from the outside in, only to realize that your target has already burrowed itself ten meters ahead of you.

Yet even for those who intend to become computer scientists heavily involved in the policy world, there lies another fundamental problem: while there have been burgeoning signs of computational, data science-oriented methods taking hold in everyday research settings across think tanks, nonprofit organizations, and the broader D.C. community, I see little to no such interest flowing in the opposite direction.

There need to be more opportunities to engage technologists in the conversation around legislation. Of course, not every CS person wants to become a policymaker. But there certainly are people with brilliant thoughts and ideas worth hearing who simply are not tapping into their potential in the policy space, because there is no clear avenue to engage with these topics at an accessible scale in the Valley.

I know for a fact that engaging everyday technologists in the legislative questions surrounding AI will be effective, because for the few technologists who ARE engaging in AI policy discourse, the difference is stark.

A few months ago, I attended a symposium hosted by UC Berkeley’s Center for Long-Term Cybersecurity and the CITRIS Policy Lab, where AI Policy Hub Fellows, graduate students in UC Berkeley’s EECS Department and School of Information, presented their findings after a year-long research fellowship exploring the intersection of technical research and policy. One project I still remember vividly involved a graduate student using Zero-Knowledge Proofs, a cryptographic method that lets one party prove the validity of a statement to another party without revealing anything beyond the fact that the statement is true, to address issues related to data privacy and cybersecurity.
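To make the idea concrete, here is a minimal sketch of a Schnorr-style proof of knowledge, one of the classic building blocks behind zero-knowledge protocols: the prover convinces the verifier that she knows a secret exponent x without ever sending it. This is my own toy illustration with deliberately tiny numbers, not the fellow’s actual system; real deployments rely on vetted cryptographic libraries, much larger parameters, and non-interactive variants.

```python
# Toy Schnorr-style proof of knowledge: prove you know x where y = g^x mod p,
# without revealing x. Illustrative only; never use such small parameters.
import secrets

p = 23          # small prime, for readability only
g = 5           # generator of the multiplicative group mod 23
q = p - 1       # order of that group

x = 6                     # prover's secret
y = pow(g, x, p)          # public key, published ahead of time

# 1. Commitment: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q. The random r masks x,
#    so the verifier learns nothing about the secret itself.
s = (r + c * x) % q

# 4. Verification: g^s must equal t * y^c mod p.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: verifier is convinced without ever seeing x")
```

The secret never leaves the prover’s machine, yet the verifier ends up convinced she holds it; that is exactly the kind of concrete, checkable property a privacy policy can be built around.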

I found it brilliant, because it was tangible. Zero-knowledge proof applications not only made sense theoretically; they were actionable, concrete concepts. And the reason someone could apply a purely mathematical idea to policy at all lay in the fact that they were inside the inner circles of AI research, knew exactly what they were dealing with, and could propose solutions that were technically adept in that way.

This goes to show that broader infrastructure to inform the tech world’s rank and file of policy implications, and to give everyday employees the opportunity to respond and engage, is drastically needed. And powerful.

Legislation surrounding AI needs to be democratized; the everyday tech worker who contributes to the newest AI systems should have a sure-fire way to bring their voice into the legislative discussions surrounding AI and emerging tech, especially since those discussions are currently dominated by a very few, many of whom lack the background and technical knowledge to act effectively.

Bridging this divide between Washington and Silicon Valley is the key to creating good, solid legislation: legislation that doesn’t stifle innovation but complements and encourages it. Without that bridge, it is very hard for anyone to adopt effective policy, spur innovation, and dispel the fear that has permeated communities outside the Silicon Valley bubble.

Our solution must start with a flow of discourse between two ecosystems that remain vastly disconnected today. If there are no efforts to actively bring everyday workers from both sides into the conversation, our country will pay a terrible price for the lack of foresight.


Carolyn Wang

CS + PPL @ UC Berkeley. Writer, musician, triathlete, & explorer. More about me: carolynwangjy.medium.com/ae3eb5de2324