New York, Times Square, and the New Normal of AI #
I just got back from about ten days of vacation between New York and Connecticut. It was my first time in Manhattan and I fell in love with it! Anyone who has visited knows its unique “organized chaos”: a constant flow of people, lights, ideas, and opportunities. Roaming the city, one thing especially caught my eye: Artificial Intelligence is everywhere, visible and explicit.
Not in keynotes or demos for insiders.
In everyday life.
AI has become part of the city’s urban and cultural landscape.
AI Among the Giants of Times Square #
Walking through Times Square, the world’s most iconic advertising plaza, alongside historic brands like Coca-Cola, Samsung, and M&M’S, I saw an ad for Arize AI.
Arize is a California startup founded in 2020 that develops observability and monitoring platforms for AI models and LLM systems in production. We’re not talking about a social network, a consumer app, or a new tech gadget, but a deeply technical B2B product aimed at the enterprise world.
And that’s exactly the point.
It wasn’t just advertising. It was a symbolic change of status.
AI companies born just a few years ago are now occupying the same cultural and media spaces that for decades were reserved for global consumer brands.
It’s perhaps the clearest sign that AI has now left the tech niche to become mainstream.
Not only that: Anthropic, OpenAI, and other AI SaaS platforms were featured in ads and banners, both outdoors and in the subway. AI is no longer just a topic for insiders: it’s an integral part of New York’s urban and cultural fabric.
AI in Cafés and the Normality of Prompting #
In cafés—from Starbucks to Dunkin’, Blank Street to Gregorys—glancing at open laptops, I noticed a pattern. Whether students, designers, journalists, or developers, everyone—sooner or later—interacted with an AI tool: Copilot in VS Code, Claude Code, ChatGPT, Gemini. It wasn’t the exception; it was the rule. AI has become the silent companion of those who work, study, and create.
The most interesting thing wasn’t seeing someone use ChatGPT. It was seeing how normal it all seemed.
No one was “trying out AI.”
AI was already integrated into the way people work, study, and code.
The ease with which you can glance at others’ screens deserves a separate article. What you see (and hear) on public transport and in cafés is often surprising—and sometimes worrying—for those who work in security.
A tip: use a privacy screen!
A Cultural Gap #
On the subway, watching a guy iterate on a prompt in Claude, I remembered a conversation before I left. A colleague, just back from a week of training in the US, told me: “They’re at least six months ahead here. AI is already part of daily work life; in Europe, we’ll get there, but more slowly.”
My colleague’s impression is confirmed by the data, though the picture is more nuanced. While the US leads in infrastructure and frontier model development, it ranks only 24th globally for AI usage among the population (28.3%), surpassed by the UAE, Singapore, and several European countries, including Norway, Ireland, and France.[^1]
The real gap emerges when looking at professional adoption: 41% of American workers use GenAI tools for professional activities,[^2] compared with roughly 20% of EU companies—with an even greater gap between large enterprises (55%) and SMEs (17%).[^3]
The difference isn’t so much in knowing how to use AI, but in the speed with which it’s incorporated into organizations’ daily processes.
This speed, in turn, is influenced by two deeply intertwined factors.
The first is cultural: in the US, adoption is often driven by a rapid experimentation mindset—a “try first, govern later” approach that accelerates spread but leaves many risks open. In Europe, companies and workers tend to have greater sensitivity to privacy, accountability, and risk management.
The second is regulatory: frameworks like the EU AI Act arise from this cultural context—not just as constraints, but as a response to a stronger demand for transparency and human oversight. In the short term, this slows down more spontaneous adoption. In the long run, however, it could become an advantage for organizations able to integrate AI, governance, and security sustainably from the start.
Behind the ease with which anyone opens an AI interface in a café, there’s a much less glamorous reality that companies are starting to discover: costs.
Every chat, code generation, or analysis performed by an LLM has a price. And many organizations are realizing that spending on tokens, inference, and AI services is growing much faster than expected.[^4]
In recent months, several providers have started introducing more granular billing models and stricter limits. GitHub Copilot, for example, is moving toward increasingly usage-based billing,[^5] while Anthropic has changed how tokens are counted for Claude, with concrete impacts on enterprise costs.[^6][^7]
The narrative that “AI will automatically save money” is clashing with a more complex reality: without governance, optimization, and control, costs can rise very quickly.
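To see why teams get surprised, a back-of-envelope calculation helps. The sketch below uses placeholder per-token prices (not any vendor’s actual price list) to show how quickly a single, modest internal workload adds up:

```python
# Back-of-envelope estimate of monthly LLM spend.
# The per-token prices are placeholders, NOT any vendor's real pricing:
# always check your provider's current price list.
PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumption)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumption)

def monthly_cost(requests_per_day: int, avg_in: int, avg_out: int,
                 days: int = 30) -> float:
    """Rough monthly cost of one workload, ignoring caching and discounts."""
    tokens_in = requests_per_day * avg_in * days
    tokens_out = requests_per_day * avg_out * days
    return (tokens_in * PRICE_PER_1M_INPUT +
            tokens_out * PRICE_PER_1M_OUTPUT) / 1_000_000

# One internal assistant: 2,000 requests/day, ~1,500 tokens in, ~500 out.
print(f"~${monthly_cost(2000, 1500, 500):,.0f}/month")  # ~$720/month
```

Multiply that by dozens of teams quietly wiring agents into their workflows, and the invoice stops being a rounding error.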
We’ll discuss this in a dedicated article.
Chaotic Growth, Real Risks #
AI usage is growing at a dizzying pace, but often in a disorganized way. In many companies, adoption starts from small teams or individuals—what we call Shadow AI: AI tools adopted without IT supervision, without governance, often with access to sensitive data that no one has formally authorized.
CISOs struggle to map how many and which AI tools are actually in use, what data they access, and with what permissions. In the name of speed, developers are increasingly granting AI agents tokens, API keys, and credentials with extended—sometimes even global—privileges, without the level of control they’d apply to a human user or a traditional service account.
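To make the anti-pattern concrete, here’s a minimal sketch of the alternative: instead of handing an agent someone’s all-powerful personal token, register it as its own OAuth2 client and issue it a short-lived token with only the scopes it needs. The endpoint, client, and scope names below are hypothetical placeholders, not a specific product’s API:

```python
import requests

# Anti-pattern: reusing a human's broad personal access token.
#   agent_token = os.environ["MY_PERSONAL_ADMIN_TOKEN"]  # don't do this

# Sketch of the alternative: the agent is its own OAuth2 client requesting
# a short-lived token via the client_credentials grant. The URL, client_id,
# and scopes are hypothetical placeholders for your identity provider.
resp = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "support-triage-agent",      # one client per agent
        "client_secret": "<from-a-secrets-manager>",
        "scope": "tickets:read tickets:comment",  # narrow and task-specific
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # expires; must be renewed
```

Short lifetimes and narrow scopes don’t make an agent trustworthy, but they bound the blast radius when something goes wrong.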
I analyzed this risk in detail in the article “Securing AI: Okta’s Blueprint for the Secure Agentic Enterprise”, but the key point is simple: the speed of adoption is real—and so are the risks it brings.
In New York, I also walked past the historic Ghostbusters firehouse. And in the end, Shadow AI is a lot like a ghost: invisible until it causes damage, hard to track, and often already inside the organization before anyone notices.
Who you gonna call? In the movie, you just called the Ghostbusters. In the enterprise world, unfortunately, you need something more.
The Role of Governance and Compliance #
In Europe, the situation is made more complex (and, in part, safer) by regulations like the EU AI Act, NIS2, and DORA. If these rules sometimes seem excessive, they’re designed to protect citizens and workers, and are often an antidote to uncontrolled chaos. The AI Act, in particular, imposes requirements for traceability, human oversight, responsibility, and transparency that force companies to treat AI agents as first-class identities.
I analyzed the impact of the EU AI Act on identity management in the article “EU AI Act Compliance: Addressing the Identity Layer”.
Checklist: How to Prepare for AI Governance #
- Map all AI tools in use (official and shadow) → Okta Universal Directory and ISPM can help discover and classify them (a minimal discovery sketch follows this checklist).
- Define clear policies on who can use what and with which data → The various O4AA access patterns offer concrete models for managing permissions and scopes. In particular, XAA (Cross App Access) is designed for AI agents that need to interact with multiple systems.
- Implement access and audit controls for AI agents → Okta policies, logs, and Governance features (like Access Requests and Certification Campaigns) can help monitor and govern AI agents like any other critical identity.
- Train teams on AI risks, limits, and responsibilities.
- Constantly monitor costs and optimize token usage.
- Regularly update policies based on regulatory and technological changes.
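As a starting point for the first item on the list, discovery can begin with data you likely already have. A minimal sketch, assuming an egress-proxy log exported as CSV with `user` and `destination_host` columns (adjust to your own log schema):

```python
import csv
from collections import Counter

# Well-known AI API endpoints to look for (extend as needed).
KNOWN_AI_HOSTS = {
    "api.openai.com": "OpenAI API",
    "api.anthropic.com": "Anthropic API",
    "generativelanguage.googleapis.com": "Google Gemini API",
    "openrouter.ai": "OpenRouter",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count (user, AI service) pairs observed in the proxy log."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            service = KNOWN_AI_HOSTS.get(row["destination_host"])
            if service:
                hits[(row["user"], service)] += 1
    return hits

for (user, service), n in find_shadow_ai("egress_proxy.csv").most_common(20):
    print(f"{user} -> {service}: {n} requests")
```

This won’t catch everything (agents calling self-hosted models, for instance), but it usually surfaces enough shadow usage to start the governance conversation.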
Beyond the specific platforms or vendors chosen, the fundamental point is to start treating AI agents as real operational identities, with access, permissions, audit, and lifecycle to be governed like any other critical entity in the organization.
Platforms like Okta can help build this level of control and governance, but the challenge is first and foremost architectural and cultural: avoiding a situation where speed of adoption and control run on separate tracks.
Toward the Agentic Enterprise: The Maturity Challenge #
The real challenge isn’t adopting AI—that battle is already won, as the data and what I saw in New York show. The challenge is to do it safely, with governance, and sustainably over time.
For years, we’ve treated software tools as simple applications. AI agents completely change the paradigm: they make decisions, execute workflows, access systems, and interact with sensitive data.
Concretely, this means knowing which AI agents operate in your organization, with which identities, what permissions, and access to which data. It means treating every AI agent as a first-class identity—not just a tool, but an actor with credentials, scope, and lifecycle to manage. This is where identity management returns to center stage.
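What does “first-class identity” look like in practice? A minimal sketch: every agent gets a record with an accountable owner, explicit scopes, and a credential expiry, so it can be reviewed and deprovisioned like any other account. The field names here are illustrative, not any product’s schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentIdentity:
    """Illustrative record for an AI agent as a first-class identity."""
    agent_id: str            # unique non-human identity
    owner: str               # human or team accountable for the agent
    scopes: list[str]        # least-privilege permissions
    data_access: list[str]   # systems and datasets it may touch
    credential_expiry: date  # forces rotation and re-certification
    reviewed: bool = False   # flipped during access-review campaigns

registry = [
    AgentIdentity(
        agent_id="support-triage-agent",
        owner="team-customer-ops",
        scopes=["tickets:read", "tickets:comment"],
        data_access=["helpdesk"],
        credential_expiry=date(2026, 9, 1),
    ),
]

# The lifecycle question every review should answer:
stale = [a.agent_id for a in registry
         if a.credential_expiry < date.today() or not a.reviewed]
print("Agents needing attention:", stale)
```

The schema matters less than the discipline: if an agent isn’t in the registry, it shouldn’t have credentials.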
The O4AA (Okta for AI Agents) framework and the Blueprint were created precisely for this need: to offer concrete access patterns for those building or governing agentic systems.
Want to understand how to choose the right access pattern for your AI agents? Read the article “Okta for AI Agents: Access Patterns”.
Conclusions: AI Is Here to Stay. Are You Ready? #
AI is no longer a promise: it’s daily reality, visible in the world’s innovation hotspots. But its adoption brings new risks, requiring governance, awareness, and the right tools. In Europe, the path is slower but—perhaps—safer.
In New York, I had the very concrete feeling that AI has already passed the cultural point of no return. It’s no longer a “special” tool: it’s becoming the invisible infrastructure of daily work.
The real question now isn’t whether to adopt it. It’s how quickly we’ll be able to govern it before it outpaces the processes, policies, and security models built over the last twenty years.
Let me know your thoughts: have you seen signs of this revolution too? How are you tackling the challenge of AI agent governance? Write in the comments or on LinkedIn!
[^1]: Microsoft AI Economy Institute, “Global AI Adoption in 2025”, January 2026.
[^2]: Alexander Bick et al., “Mind the Gap: AI Adoption in Europe and the U.S.”, Federal Reserve Bank of St. Louis / Brookings Papers on Economic Activity, March-April 2026.
[^3]: Alice Labs, “Global AI Adoption Index 2026”, April 2026. EU enterprise AI use: 19.95% (2025), with a clear gap between large enterprises (55%) and small ones (17%).
[^4]: a16z, “How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025”, February 2026: “What I spent in all of 2023, I now spend in a week.”
[^5]: GitHub Blog, “Copilot is moving to usage-based billing”, April 2026.
[^6]: Anthropic, Claude API Pricing. Analysis: Finout.io, “Claude Opus 4.7 Pricing”, April 2026. Confirmation: Let’s Data Science, “Claude Generates High Token Usage”, April 2026.
[^7]: IT Brief, “Anthropic shifts enterprise billing to token-based pricing”, April 2026.