3 Comments
ryan:

thanks for this, this is excellent. it made me think aloud about bottlenecks on the way to realising this:

1. self-regulation and AI safety: here, concerns about ethics and the potential harm it could cause could delay or stop more powerful versions of AI platforms from rolling out. when this happens, how can one gauge whether it is good for us? judging by the recent slate of departures at OpenAI, this doesn't seem to be happening

2. resource strain: as seen with the Three Mile Island deal, this is going to eat up a lot of electricity, water, land & compute. for compute, there is NVIDIA & everyone else chasing after them, but are the utilities ready? what regulations need to go in order to free them up to expand capacity? is the investment being made?

3. offline data acquisition: if it's in the cloud, the AI will sort it. but if it's offline, how does it get into the cloud? this is where the cameras, microphones, data entry, sensors, etc. all play a role, along with the communications network and infrastructure. satellite coverage and/or 5G helps a lot here, but if you have neither, what happens? so there is still a break between what happens offline in the real world and what can be put into the cloud

4. legislation: China is way ahead; I guess now we wait for the test cases as they loosen regulation and allow companies to experiment more widely. unfortunately I know little about AI regulation in the US. I guess they wait for something to break first, before they fix it? that usually doesn't end well.

Rahul Bhushan:

Thanks for your thoughtful comment! You’ve highlighted some important bottlenecks that could impact AI’s future development. I agree that self-regulation and AI safety will play a critical role, and the challenges of infrastructure and data acquisition are very real. However, I’m optimistic that as we tackle these issues—particularly through innovation in computing and regulatory alignment—we’ll be able to unlock AI’s full potential.

Regarding legislation, it’s true that different regions are taking varied approaches, and China’s stance is certainly more aggressive. In the US and Europe, it seems like we’re leaning toward a more cautious, balanced approach, which has its pros and cons.

I think the key question is: How quickly can we adapt regulations and infrastructure to match the pace of AI development while ensuring safety and inclusivity? I’d love to hear your thoughts on that!

ryan:

I'm treading on eggshells here as this is not my area, but here's my two cents:

my naive understanding of how America works is that something has to blow up before political will becomes strong enough for anything to really get done (GFC, FTX, and if you go back far enough, 9/11 and Pearl Harbour). so this is about mitigation, and policy agility to effect a rapid resolution or cleanup, rather than prevention (and this is a feature of a liberal, dynamic system, rather than a bug). this means strengthening the institutional framework and talent pool to draw upon, so that when the crisis comes, one is able to quickly judo it. therefore:

1. bipartisanship is a must-have. when the time comes to act, you can't spend time arguing about what needs to be done. so having a committee where thoughtful debate is carried out in the background, and a policy drawer to pull from when it is time to go, is ideal

2. agility and implementation. this is about bridging the knowledge gaps between basic understanding, technical issues and implementation on one side, and the policy desire to act on the other; the connection with industry has to be deeper and broader. you have situations where icons such as Eric Schmidt (and from finance, Hank Paulson) have made the switch from private sector to public, but one has to pre-solve the skill gap between regulators and the private sector, because the pay scales are different. so in the same way that those who go into public office get to sell their stock tax-free, this needs to be rolled out at a lower level, to more than just the guy at the top, to fill the public sector bench with people who can keep up with the private sector's speed and its ability to explain, advocate, and implement. this learned pool of engineers, technicians, PMs etc. doesn't have to be a big team, but it should be on standby feeding into the policy discussion, and able to scale to the size of the problem when it comes

3. a coalition (and expertise) of the willing. this is more of a nice-to-have, since the rest of the world aren't just bystanders and also have to brace for collateral damage. there are a lot of thoughtful, skilled foreign companies, governments and civil servants who could provide valuable input to 1. and 2., and this is a space where American leadership can organise, draw upon and influence best practices from elsewhere, and bring everyone closer together in partnership. it may even (it should) include China =P the most convincing argument I can think of is that if this blows up globally or puts a neighbour into shock, it ultimately hurts American interests, in the same way Clinton and Rubin intervened during the Mexican tequila crisis. so AI blowing up elsewhere could be one for America to come to the rescue: if not directly beneficial to American interests, then indirectly as practice or a trial run for their own rescue ops, for when the time comes for them to deal with their own crisis
