Databases are Dead (along with all the security they provided). Long live AI!

Written by Ron Williams

Future · 5 min read

For the last seventy years, databases have acted as the foundational piece of the tech stack.

I call these years the Database Era. Every application and device you use today is built on some kind of database that organizes your information and enforces permissions around who can read, store, change, or delete your data.

This era began with data storage research in the 1950s, before IBM brought the technology to the largest companies in the late 1960s. Then in 1977, Larry Ellison, building on IBM’s relational database research, founded the company that became Oracle and brought databases to the mainstream business world. Other commercial database vendors followed, and by the end of the 2000s open source databases had come to dominate the market, culminating in the last few years with some huge public database companies becoming the darlings of Wall Street.

Today, I think of databases as the silent unsung heroes of data security. Their security and compliance offerings are far more complex than most people understand, providing the exact data permissions and tracking we need for the applications, APIs, and infrastructure that keep our companies running.
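To make that concrete, here’s a minimal sketch of the two things a database layer gives you that matter most here: a permission check before any data is returned, and an audit trail of every attempt. The names (User, READ_PERMISSIONS, read_table) are made up for illustration, not any particular vendor’s API.

```python
# Illustrative sketch only: the kind of access control and audit trail a
# database layer enforces. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class User:
    name: str
    roles: set[str]

# Hypothetical table-level permissions: which roles may read which tables.
READ_PERMISSIONS = {
    "salaries": {"hr_admin"},
    "performance_reviews": {"hr_admin", "manager"},
    "support_tickets": {"hr_admin", "manager", "employee"},
}

audit_log: list[dict] = []

def read_table(user: User, table: str) -> str:
    """Check permissions before returning data, and record every attempt."""
    allowed = bool(user.roles & READ_PERMISSIONS.get(table, set()))
    audit_log.append({
        "user": user.name,
        "table": table,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"{user.name} may not read {table}")
    return f"rows from {table}"  # placeholder for the actual query result

# An ordinary employee is stopped at the permission check, and the denied
# attempt is still recorded for compliance review.
employee = User("sam", {"employee"})
read_table(employee, "support_tickets")   # allowed
try:
    read_table(employee, "salaries")      # denied and logged
except PermissionError:
    pass
print(audit_log)
```

Real databases express this through grants, roles, row-level policies, and query logs, but the shape is the same: who asked, what they were allowed to see, and a record of it all.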

It’s obvious to me that we’re no longer in the Database Era. Instead, we’re in the AI Era.

The AI Era began on August 10, 2022, when the open source community gained access to the code for the text-to-image AI called Stable Diffusion (happy anniversary!), signaling that the AI ivory towers had fallen and that advances were now in the hands of the general developer ecosystem. Awareness of AI’s power spread even faster with the launch of ChatGPT on November 30, 2022. What has followed is a wave of AI advancement, an explosion of generative AI, and the rapid democratization of one of humanity’s most powerful technologies.

The impact of this fast growth has been felt across industries. Educators are worried about students turning in work done by AI. Employers are worried about confidential information going into AI models and how to track what happens to it afterward. Lawyers are worried about AI’s misappropriation of IP rights and about data privacy compliance. Governments are discussing regulation. Hollywood is on strike, and its picket signs all point to AI. Everyone, it seems, is talking about how AI might transform their work, companies, and economies.

Personally, as a former IT executive and Chief Security Officer, I’m thinking about how centralized management, security, and compliance will exist in this new AI era.

Back in 2013, I was working at Riot Games when our employees started asking about using a new messaging service called Slack. As a leading entertainment company, Riot relied on surprising our audiences. If our employees wanted to use Slack, I had to assess how Slack’s data was handled, stored, and secured.

Fortunately, Slack was built at the time on a popular open source database called MySQL. The database relied on decades of proven database theory, data security engineering, and in-depth compliance capabilities protecting user data. It was an open source data tool built with enterprises like Riot Games in mind. Today, Slack is one of several dozen SaaS apps used by Riot, each dependent on its own database.

As we embark on this new AI era, we have to remember that AI models are not databases. They are huge, permissionless, hard-to-understand lakes of data.

Since August 2022, VCs have written checks to over 700 AI startups, most of which are now pitching their products to enterprises. The problem is, everyone building and funding these AI products is acting as though we’re still in the Database Era and a key set of data management, security, and compliance capabilities already exists. The truth is, AI has none of these.

Let me explain. Think of AI as a data lake managed by a super-smart, friendly intern. When you ask the AI a question, the intern will want to complete the task at hand. The issue is, the intern doesn’t know the difference between the information you’re allowed to have and the information you’re not. This means an employee might use a new AI tool with access to company data to look up how much a colleague is making, or to read their manager’s performance review. Anything in the AI will happily be handed over to anyone who shows up at the shores of the data lake.

Nothing within the AI model knows what is allowed with the data the model receives. Nothing will stop the AI from handing company data out, and nothing records what happens with the data, or when. If you’re a business with any kind of compliance burden, you legally can’t even use the newest AI models (or probably 99% of the startups trying to sell AI to you), because you have no oversight. Even when it comes to outputs, AI companies can’t tell you whether their AI created something or a human did. These risks have led the legal departments of many of the world’s biggest companies to ban products like ChatGPT for their employees.
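That gap is the whole problem. Below is a rough sketch of the kind of governance layer a company has to bolt on around a model today, because the model itself enforces and records nothing. Everything here (call_model, governed_ask, the BLOCKED_TOPICS policy) is a hypothetical placeholder, not a real vendor API.

```python
# Rough sketch of a policy-and-logging wrapper a business would have to build
# around an AI model; nothing like it exists inside the model itself.
from datetime import datetime, timezone

BLOCKED_TOPICS = {"salary", "performance review"}  # assumed company policy, for illustration
usage_log: list[dict] = []

def call_model(prompt: str) -> str:
    """Stand-in for a real model call; the model has no notion of permissions."""
    return f"an answer drawn from everything the model has absorbed about: {prompt!r}"

def governed_ask(user: str, prompt: str) -> str:
    """Enforce policy and record the exchange outside the model,
    because the model will otherwise answer anyone about anything."""
    allowed = not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)
    usage_log.append({
        "user": user,
        "prompt": prompt,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        return "Request blocked by company policy."
    return call_model(prompt)

print(governed_ask("sam", "Summarize last quarter's support tickets"))  # allowed
print(governed_ask("sam", "What is my manager's salary?"))              # blocked, but still logged
print(usage_log)
```

Notice that all of the enforcement and record-keeping lives outside the model; remove the wrapper and you are back to the friendly intern handing out whatever it knows.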

New AI tool vendors selling into businesses have a lot of data security, user auditing, and compliance controls to build, important things that were taken for granted when startups were building in the Database Era. Bring in any AI tool on the market today, and your data inside it is on its own.

As we accept the end of the Database Era, we have to rethink how data is being created, accessed, used, and managed by AI applications.

It’s clear that in the near future, knowledge workers will no longer need most of the SaaS apps they rely on today, even the largest ones. Instead, they’ll have purpose-built AIs to work with directly. This means the big on-premise and cloud database companies that have built a business around structuring and protecting data for everyone’s software stack will no longer be the source of our data structures.

I expect a few of the largest AI players will rush out new security features and spend years trying to properly implement decades of database-style security and data management capabilities. Those efforts will still lack centralized, standardized controls across all the other AI models handling data in a company. This problem will not be easily solved inside the AI itself.

If you are wondering how to bring the most powerful commercial and open source AI capabilities to every knowledge worker in a secure, centrally managed, and compliant environment, please feel free to reach out. Our team of enterprise AI, security, and infrastructure leaders from OpenAI, Google, Bloomberg, Robust Intelligence, Intuit, Square, Clover Health, Bird, and Riot Games is ready to help you move confidently into this new AI Era. Some of us are in Las Vegas this week for the Black Hat conference; let us know if you would like to grab a coffee, chat about AI, or just expand your network.

