Acuvity Raises $9 Million Seed Funding for Gen-AI Governance and In-house Development

Sunnyvale, CA-based startup Acuvity has emerged from stealth with $9 million in seed funding to advance its secure AI adoption platform.

The seed round was led by Foundation Capital, with participation from individual investors including Basil Alwan and Sri Reddy. Satyam Sinha, formerly co-founder and VP of engineering at Aporeto before its acquisition by Palo Alto Networks in late 2019, is co-founder and CEO of Acuvity.

Gen-AI is rapidly becoming a foundational productivity tool for employees. Its ability to answer questions, generate grammatically correct text, summarize complex reports, convert spreadsheets to presentations, and more, in minutes rather than hours or days, all but guarantees ever wider, near-standard use by busy staff. The problem is that it is neither well understood nor always used securely.

This is an issue with almost all new technology as it breaks into the mainstream: a lack of visibility into its processes. It is the issue Acuvity is tackling. “Most companies don’t have visibility into how their employees are using gen-AI platforms nor into all the risks to which they are exposed,” says Sinha. “Employees may be accessing them against policy, leaking data inadvertently or proliferating risky AI content into the enterprise.”

The number of available gen-AI systems is continually growing, and the number of different models available from the major systems is also expanding. Each system and model has its own terms of use: sometimes data input or uploaded to the system is destroyed after use, and sometimes it is retained for further model training.

Users are rarely aware of such fine distinctions; their drive is simply to work better and faster. But the effect can be the leakage of personal or proprietary information, and the import of false information. The company has little visibility into which employees are sharing what information with which gen-AI systems. Providing that visibility is a primary purpose of Acuvity.

Externally, the platform continuously scans the internet for new AI offerings, and risky AI models can be blocked. Internally, it scans communication with approved models and can detect potentially worrisome content: should you be including that sort of PII or revenue data in your gen-AI prompts?

It doesn’t block the prompt, but flags the communication, much as Gmail warns its users to be careful when emailing an unknown recipient. The user can ignore the warning, but a dashboard alerts the security team, who can respond with better AI education or, where necessary, disciplinary action.
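
To illustrate the flag-don’t-block pattern described here, below is a minimal Python sketch. Acuvity has not published its detection internals, so everything in it is hypothetical: the function names (scan_prompt, alert_dashboard, submit_prompt) and the simple regex patterns are stand-ins, and a real product would use far richer detection than regular expressions.

    import re

    # Hypothetical sensitive-data patterns; purely illustrative.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan_prompt(prompt):
        # Return the names of sensitive-data patterns found in the prompt.
        return [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]

    def alert_dashboard(user, findings):
        # Stand-in for the security-team dashboard alert the article describes.
        print(f"[dashboard] {user} flagged: {', '.join(findings)}")

    def submit_prompt(user, prompt):
        findings = scan_prompt(prompt)
        if findings:
            # Flag, don't block: warn the user, alert security,
            # and still deliver the prompt to the model.
            print(f"[warning to {user}] prompt appears to contain: {', '.join(findings)}")
            alert_dashboard(user, findings)
        print("[model] prompt forwarded")

    submit_prompt("alice", "Summarize: contact jane@example.com, SSN 123-45-6789")

The key design point is the last step: unlike a traditional DLP gateway, the prompt is forwarded regardless, and enforcement happens downstream through education or discipline.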

The Acuvity platform goes beyond visibility into employees’ gen-AI usage; it also supports the development of secure in-house gen-AI applications. “Acuvity’s platform not only governs employee accessing gen-AI, it also helps companies build gen-AI applications securely without slowing innovation,” explains the company. “It creates a separation of concerns and speeds up AI application development by making AI security ‘pluggable’, with no code changes required once deployed.”
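
A minimal sketch of what “pluggable” security with separation of concerns could look like in principle: checks register with an interception layer, so adding or updating a check requires no change to the application code calling the model. The names (register_check, send_to_genai) are hypothetical illustrations, not Acuvity’s API, and in a deployed product such interception would more likely happen at the network layer rather than in-process.

    from typing import Callable

    # Hypothetical registry of pluggable security checks.
    _checks: list[Callable[[str], None]] = []

    def register_check(check):
        # Plug a new check in at runtime; application code is untouched.
        _checks.append(check)

    def send_to_genai(prompt):
        # Interception layer: every registered check runs before the call.
        for check in _checks:
            check(prompt)
        return f"[model response to: {prompt!r}]"  # stand-in for a real API call

    # Security later plugs in a check; callers of send_to_genai are unchanged.
    register_check(lambda p: print(f"[audit] prompt logged ({len(p)} chars)"))
    print(send_to_genai("Draft a press release"))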

The increased adoption of gen-AI is inevitable. Staff rarely have bad intentions; they are driven by a desire to work better and faster and to be more valuable to their employer. But they do not always understand the possible effects of new technology. It’s not a new problem: we’ve been here before with Shadow IT. Now we have Shadow AI. Acuvity’s purpose is to shine the light of visibility, and governance, into this new shadow.

Related: Shadow AI – Should I be Worried?

Related: Patented.ai Raises $4 Million for AI Data Privacy Solution

Related: Insider Q&A: CIA’s Chief Technologist’s Cautious Embrace of Generative AI

Related: Japan’s Kishida Unveils a Framework for Global Regulation of Generative AI
