Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    235,317 followers

    Data governance is one of the most misunderstood topics in the enterprise, because most people explain it from the inside out: policies, councils, standards, stewardship. But the business does not buy any of that. The business buys outcomes:
    → trustworthy KPIs
    → vendor and partner data you can actually use
    → faster financial close
    → fewer reporting escalations
    → smoother M&A integration
    → AI you can deploy without creating risk debt

    Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

    So let's simplify it. Data governance is four things:
    → ownership
    → quality
    → access
    → accountability

    And it becomes very practical when you think in 4 layers:

    1. Data Products (what the business consumes)
    → a named dataset with an owner and SLA
    → clear definitions + metric logic
    → documented inputs/outputs and intended use
    → discoverable in a catalog
    → versioned so changes don't break reporting

    2. Data Management (how products stay reliable)
    → quality rules + monitoring (freshness, completeness, accuracy)
    → lineage (where it came from, where it's used)
    → master/reference data alignment
    → metadata management (business + technical)
    → access controls and retention rules

    3. Data Governance (who decides, who is accountable)
    → data ownership model (domain owners, stewards)
    → decision rights: who can change KPI definitions, thresholds, and sources
    → issue management: triage, escalation paths, resolution SLAs
    → policy enforcement: what's mandatory vs. optional
    → risk and compliance alignment (auditability, approvals)

    4. Data Operating Model (how you scale across the enterprise)
    → domain-based setup (data mesh or not, but clear domains)
    → operating cadence: weekly issue review, monthly KPI governance, quarterly standards
    → stewardship at scale (roles, capacity, incentives)
    → cross-domain decision-making for shared metrics
    → enablement: templates, playbooks, tooling support

    If you want to start fast: pick the 10 metrics that run the business, assign an owner to each, define decision rights and escalation, and then build the data products around them.

    If you want to stay ahead as AI reshapes work and business, you will get a lot of value from my free newsletter: https://lnkd.in/dbf74Y9E
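    The "start fast" recipe above (pick the metrics, assign owners, define decision rights and escalation, then build the data products) can be sketched as a minimal metric registry. This is an illustrative sketch under assumed names and fields, not a real governance tool:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A named dataset with an owner, SLA, and versioned definition."""
    name: str
    owner: str                # accountable domain owner
    sla_freshness_hours: int  # maximum acceptable data age
    definition: str           # metric logic, stated in plain language
    version: str = "1.0"
    escalation_path: list = field(default_factory=list)

registry = {}

def register(product: DataProduct) -> None:
    """Reject a changed definition that keeps the same version,
    so downstream reporting never breaks silently."""
    existing = registry.get(product.name)
    if (existing and existing.definition != product.definition
            and existing.version == product.version):
        raise ValueError(f"{product.name}: definition changed without a version bump")
    registry[product.name] = product

# Example: one of the "10 metrics that run the business".
register(DataProduct(
    name="net_revenue_retention",
    owner="finance-data-domain",
    sla_freshness_hours=24,
    definition="(starting ARR + expansion - churn) / starting ARR",
    escalation_path=["metric steward", "finance data owner", "governance council"],
))
```

    The point is the contract, not the code: every metric has exactly one owner, an SLA, and a version that must change whenever the definition does.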

  • View profile for Rock Lambros
    Rock Lambros is an Influencer

    Securing Agentic AI @ Zenity | Cybersecurity | CxO, Startup, PE & VC Advisor | Executive & Board Member | CISO | CAIO | QTE | AIGP | Author | OWASP AI Exchange, GenAI & Agentic AI | Tiki Tribe Founding Member

    19,948 followers

    AI security (securing the use of AI) is going to kill me. I use Claude Code almost daily. It's a problem... Here's what I have to change AGAIN this week.

    Security researcher Ari Marzuk disclosed 30+ vulnerabilities across AI coding tools: Cursor, GitHub Copilot, Windsurf, Claude Code. All of them. He called it IDEsaster. The attack chain includes prompt injection, hijacking LLM context, and auto-approved tool calls executing without permission. Then legitimate IDE features are weaponized for data exfiltration and RCE. Your .env files. Your API keys. Your source code. Accessible through features you thought were safe.

    Most studies I read claim that around 85% of developers now use AI coding tools daily. Most have no idea their IDE treats its own features as inherently trusted.

    So... after reviewing Ari's research, here's what I will be doing. Be warned: all of this is SO much easier said than done!

    Audit every MCP server connection. Check for tool-poisoning vectors where legitimate tools might parse attacker-controlled input from GitHub PRs or web content. Remove servers you can't verify.

    Disable auto-approve for file writes. The attack chains weaponize configuration files and project instructions like .claude/settings.json and CLAUDE.md. One malicious write to these files can alter agent behavior or achieve code execution without additional user interaction.

    Move all credentials to a secrets manager. No .gitignored .env files in agent-accessible directories. API keys live in 1Password CLI. Environment variables inject at runtime through a wrapper script the LLM never sees.

    Run Claude Code in isolated containers. Mounted volumes limited to specific project directories. No access to ~/.ssh, ~/.aws, or ~/.config. If the agent gets compromised, the blast radius stays contained.

    Enable all security warnings. Claude Code added explicit warnings for JSON schema exfiltration and settings-file modifications. These exist because Anthropic knows the attack surface.
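    The isolated-container idea can be sketched as a small launcher script. The image name (`claude-sandbox`) and mount layout are assumptions for illustration, not an official setup:

```shell
#!/bin/sh
# Sketch of a sandboxed Claude Code launch. Only the project directory is
# mounted, so ~/.ssh, ~/.aws, and ~/.config stay outside the agent's reach.
# Image name and paths are illustrative assumptions.
PROJECT_DIR="${1:-$PWD}"
CMD="docker run --rm -it -v ${PROJECT_DIR}:/workspace -w /workspace claude-sandbox:latest claude"
echo "$CMD"
```

    In practice you would exec the command rather than echo it; echoing keeps the sketch side-effect free.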
    Add pre-commit hooks for hidden characters. Prompt injections hide in pasted URLs, READMEs, and file names using invisible Unicode. Flag non-ASCII characters in any file the agent might ingest.

    The fix isn't to stop using AI coding tools. The fix is to stop trusting them implicitly. What controls do you have for AI tools with write access to your codebase?

    👉 Follow for more AI and cybersecurity insights with the occasional rant #AISecurity #DevSecOps
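    The hidden-character check can be sketched in a few lines of Python. A real pre-commit hook would wrap this in a CLI over staged files, and exactly which characters to flag is a judgment call; this sketch targets zero-width and other Unicode format characters:

```python
import unicodedata

# Zero-width and bidi-control characters commonly used to hide injected text.
SUSPICIOUS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff", "\u202e"}

def find_suspicious(text):
    """Return (position, codepoint) pairs for invisible/format characters."""
    hits = []
    for i, ch in enumerate(text):
        # Category "Cf" covers Unicode format characters, which render invisibly.
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X}"))
    return hits

print(find_suspicious("hello\u200bworld"))  # the zero-width space at index 5 is flagged
```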

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,518,259 followers

    Yesterday I had one of those rare conversations that stays with you. I sat down with Dr. Rebecca Hinds, PhD from Glean to unpack her new research, The AI Transformation 100, and it completely reframed how I think about AI in organizations. In the article I just published, I share the insights that hit me hardest, because they match exactly what I see with executives and teams every day.

    Here's what you'll learn:
    🔸 80% of AI transformation is just… transformation. The same human issues, politics, and leadership gaps, simply amplified by AI.
    🔸 AI is a megaphone. Healthy cultures accelerate. Broken workflows break faster. Rebecca's data makes this painfully clear.
    🔸 Leadership behavior is the biggest adoption driver. If leaders don't use AI themselves, their teams won't either.
    🔸 100 practical ideas from 100+ leaders. Not hype, but real moves happening right now.

    This is one of the most grounded and useful reports I've come across. The AI Transformation 100 releases today and I strongly encourage every business leader to read it.

    Access the full report and join the conversation: https://lnkd.in/e7YYwBrt

    And if you want the deeper story, don't miss my full interview with Rebecca; you'll want to watch it to the end: https://lnkd.in/egsdhVaA

    #GleanAmbassador #AITransformation #Leadership #ArtificialIntelligence #DigitalTransformation #ChangeManagement #FutureOfWork #BusinessStrategy #Innovation

  • View profile for Marc Benioff
    Marc Benioff is an Influencer
    241,078 followers

    The Agentic Enterprise is driving profound change across every industry, but nowhere are the stakes higher than in healthcare. There is an incredible opportunity to elevate the work of healthcare professionals and deliver stronger care for patients around the world.

    In an essay for TIME, Murali Doraiswamy, professor of medicine at Duke University, and I discuss how AI is revolutionizing medicine, including:
    • Flagging subtle abnormalities in scans and slides that a human eye might miss.
    • Speeding up the discovery of drugs and drug targets.
    • Providing patients faster and more personalized support, from scheduling to flagging side effects.

    But we've also seen that over-reliance on AI can lead to "deskilling," in which medical professionals become less effective. That underscores the importance of approaches that keep humans at the center, such as Intelligent Choice Architecture (ICA), where AI systems don't make decisions but nudge providers to take a second look at results, weigh alternatives, and stay actively engaged in the process.

    The future of work is humans and AI agents working together. If we commit to designing systems that sharpen our abilities, we can combine the promise of AI with the critical thinking, compassion, and real-world judgment that only humans bring. https://lnkd.in/gqkTUfb6

  • View profile for Martyn Redstone

    Head of Responsible AI & Industry Engagement @ Warden AI | Ethical AI • AI Bias Audit • AI Policy • Workforce AI Literacy | UK • Europe • Middle East • Asia • ANZ • USA

    21,027 followers

    Three AI recruiters look at the same 109 CVs. They agree only 14% of the time. That's not the start of a joke. And that's not efficiency. That's what I call "Rank Roulette".

    When I tested ChatGPT, Gemini and Grok against the same job spec and anonymised CV set, here's what happened:
    • 14% overlap in shortlists → four times out of five, the models disagreed.
    • ±2.5 places volatility → yesterday's #2 became today's #5.
    • 55% of CVs never surfaced → candidates vanished with no audit trail.
    • 96% recycled rationales → fluent, but shallow logic.

    We're told by vendors and in-house "tinkerers" that LLMs can "shortlist in seconds". The truth: they behave more like over-confident interns: smooth on the surface, but shockingly inconsistent.

    And the worst part? It's not even random. In a follow-up piece, I explored why this happens: a technical quirk called batch non-determinism. In plain English: your candidate's fate changes depending on what else the server was processing at that moment.

    Until volatility is tamed, hands-off AI screening with LLMs is more than risky. It's completely unexplainable, indefensible and a governance nightmare.

    Go to the comments for:
    👉 Full research
    👉 Follow-up on why AI recruiters play favourites
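    For context, a shortlist-overlap figure like the 14% above can be computed as an average pairwise similarity between the models' shortlists. The candidate IDs and shortlists below are made up, and Jaccard similarity is just one reasonable choice of overlap measure (the post does not say which one was used):

```python
from itertools import combinations

def shortlist_overlap(shortlists):
    """Average Jaccard similarity across all pairs of model shortlists."""
    pairs = list(combinations(shortlists.values(), 2))
    scores = [len(a & b) / len(a | b) for a, b in pairs]
    return sum(scores) / len(scores)

# Three models, same CV pool, five-slot shortlists (invented data).
shortlists = {
    "model_a": {"cv01", "cv02", "cv03", "cv04", "cv05"},
    "model_b": {"cv01", "cv06", "cv07", "cv08", "cv09"},
    "model_c": {"cv02", "cv10", "cv11", "cv12", "cv13"},
}
print(round(shortlist_overlap(shortlists), 2))  # low overlap despite identical inputs
```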

  • View profile for Lenny Rachitsky
    Lenny Rachitsky is an Influencer

    Deeply researched no-nonsense product, growth, and career advice

    343,787 followers

    My biggest takeaways from Ethan Smith on how to win at AEO (i.e. get ChatGPT to recommend your product):

    1. Being mentioned most often beats ranking first. In Google, the #1 blue link wins. In ChatGPT, the answer summarizes multiple sources, so appearing in five citations beats ranking #1 in one. Ethan's strategy: get mentioned on Reddit, YouTube, blogs, and affiliates. Volume of mentions matters more than any single placement.

    2. LLM traffic converts 6x better than Google search traffic. Webflow saw this dramatic difference because users who come through AI assistants have built up much more intent through conversation and follow-up questions, making them highly qualified leads.

    3. Early-stage startups can win at AEO immediately, unlike with SEO. Traditional SEO requires years of domain authority. But a brand-new Y Combinator company mentioned in a Reddit thread today can show up in ChatGPT tomorrow. The playing field is finally level.

    4. The long tail of AEO is 4x bigger than SEO's. People ask ChatGPT questions with 25 or more words (vs. 6 in Google). Ethan found gold in queries like "Which meeting transcription tool integrates with Looker via Zapier to BigQuery?", questions that never existed in search but are perfect for AI. Own these micro-niches.

    5. Reddit is proving to be the kingmaker for AI visibility. ChatGPT trusts Reddit because the community polices spam better than any algorithm. Ethan's exact playbook: make one real account, say who you are and where you work, give genuinely helpful answers. Five good comments can transform your visibility. No automation, no fake accounts, just be helpful.

    6. YouTube videos for "boring" B2B terms are a gold mine for AEO. Nobody makes videos about "AI-powered payment processing APIs," which is exactly why you should. While everyone fights over "best CRM software," the high-value, zero-competition long tail is wide open in video.

    7. Your help center is now a growth channel. All those "Does your product do X?" questions flooding ChatGPT can be answered by help-center pages. Move them from a subdomain to a subdirectory, cross-link aggressively, and cover every feature question. Ethan calls this the most underutilized opportunity in AEO.

    8. January 2025 was the inflection point in AEO growth. That's when ChatGPT made answers more clickable (maps, shopping cards, citations) and adoption exploded. Webflow went from near zero to 8% of signups from AI. This channel is accelerating faster than anything Ethan's seen in 18 years.

    9. The AEO playbook: (1) find questions from competitor paid-search data, (2) set up answer tracking, (3) see who's showing up as citations, (4) create landing pages answering all follow-up questions, (5) get mentioned offsite via Reddit/YouTube/affiliates, (6) run controlled experiments, (7) build a dedicated team. This exact process is driving real results at scale.
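    A minimal sketch of the answer-tracking idea in steps (2) and (3) of the playbook: given a batch of collected AI answers, count whole-word brand mentions. The answers and brand names below are invented for illustration, and real tracking would also need to collect the answers in the first place:

```python
import re
from collections import Counter

def mention_counts(answers, brands):
    """Count how many answers mention each brand (case-insensitive, whole word)."""
    counts = Counter()
    for brand in brands:
        pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
        counts[brand] = sum(1 for a in answers if pattern.search(a))
    return counts

# Invented sample of collected assistant answers.
answers = [
    "For meeting notes, Webflow and Notion come up often.",
    "Notion integrates well with Zapier.",
    "Many teams use notion for internal docs.",
]
print(mention_counts(answers, ["Notion", "Webflow"]))
```

    Tracking this number over time, per question cluster, is what turns "get mentioned more" into a measurable experiment.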

  • View profile for Abby Hopper
    Abby Hopper is an Influencer

    Former President & CEO, Solar Energy Industries Association

    73,340 followers

    Something VERY cool just happened in California, and it could be the future of energy.

    On July 29, just as the sun was setting, California's electric grid was reaching peak demand. However, instead of ramping up fossil fuel resources, the California Independent System Operator (CAISO) and local utilities decided to lean on a network of thousands of home batteries.

    More than 100,000 residential battery systems (made up primarily of Sunrun and Tesla customers) delivered about 535 megawatts of power to California's grid right as demand peaked, visibly reducing net load (as shown in the graphic).

    Now, this may not seem like a lot, but 535 megawatts is enough to power more than half of the city of San Francisco, and that can make all the difference when a grid is under stress.

    This is what's called a Virtual Power Plant, or VPP. It's a network of distributed energy resources that grid operators can call on in an emergency to provide greater resilience to our energy systems. Homeowners are compensated for the dispatch, grid operators are given another tool for reliability, and ratepayers are saved from instability. It's a win-win-win.

    Now, this was just a test to prepare for other need-based dispatches during heat waves in August and September. But it's historic. As homeowners add more solar and storage resources, the impact of these dispatch events will become even more profound and even more necessary. This was the second time this summer that VPPs have been dispatched in California, and I expect to see even more as this technology improves.

    Shout out to Sunrun, Tesla, and all the companies that participated. Keep up the great work.
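    A rough sanity check on the 535 MW figure: assuming an average instantaneous load of about 1.2 kW per home during an evening peak (my assumption for illustration, not a number from the post), the dispatch could serve roughly 450,000 homes at once:

```python
# Back-of-the-envelope: homes served concurrently by a 535 MW dispatch.
# The per-home load is an illustrative assumption; actual loads vary widely.
dispatched_mw = 535
avg_home_load_kw = 1.2  # assumed average evening-peak load per home

homes_served = dispatched_mw * 1000 / avg_home_load_kw
print(f"~{homes_served:,.0f} homes")
```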

  • View profile for Saanya Ojha
    Saanya Ojha is an Influencer

    Partner at Bain Capital Ventures

    77,520 followers

    This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing. Markets, predictably, had a minor existential crisis. Pundits whispered the B-word ("bubble"), traders rotated into defensive stocks, and your colleague forwarded you a link with "is AI overhyped???" in the subject line.

    Let's be clear: the 95% failure rate isn't a caution against AI. It's a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) the tech is very real, and (2) most companies are hilariously bad at deploying it.

    If you're a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been "SOC2-audited." You ship. You iterate. You win. If you're an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don't need a better model; you need a cultural lobotomy.

    This isn't an "AI bubble" popping. It's the adoption lag every platform shift goes through.
    - Cloud in the 2010s: endless proofs of concept before actual transformation.
    - Mobile in the 2000s: enterprises thought an iPhone app was a strategy. Spoiler: it wasn't.
    - Internet in the 90s: half of Fortune 500 CEOs declared "this is just a fad." Some of those companies no longer exist.

    History rhymes. The lag isn't a bug; it's the default setting. Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:

    ▪️ Back-office > front-office. The biggest ROI comes from back-office automation (finance ops, procurement, claims processing), yet over half of AI dollars go into sales and marketing. The treasure's just buried in a different part of the org chart.

    ▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it's literally your full-time job to stay current on model architecture, you'll fall behind. Your engineers don't need to reinvent an LLM-powered wheel; they need to build where you're actually differentiated.

    ▪️ Integration > innovation. Pilots flop not because AI "doesn't work," but because enterprises don't know how to weave it into workflows. The "learning gap" is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself. Without redesigning processes, "AI adoption" is just a Peloton bought in January and used as a coat rack by March. You didn't fail at fitness; you failed at follow-through.

    In five years, GenAI will be as invisible, and as indispensable, as cloud is today. The difference between the winners and the laggards won't be access to models, but the courage to rip up processes and rebuild them. The "95% failure" stat doesn't mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just confused growing pains for terminal illness.

  • View profile for Luiza Jarovsky, PhD
    Luiza Jarovsky, PhD is an Influencer

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (91,000+ subscribers), Mother of 3

    126,854 followers

    🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

    Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants be obtained.

    The lawsuit specifically mentions:
    - the Electronic Communications Privacy Act;
    - the Computer Fraud and Abuse Act;
    - the California Invasion of Privacy Act;
    - California's Comprehensive Computer Data and Fraud Access Act;
    - the California common law torts of intrusion upon seclusion and conversion;
    - the California Unfair Competition Law.

    As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

    The main allegation is that Otter obtains consent only from its account holders but not from other meeting participants. It asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

    So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent. Additionally, it's important to keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

    👉 Link to the lawsuit below.
    👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
    👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.

  • View profile for Alex Wang
    Alex Wang is an Influencer

    Learn AI Together - I share my learning journey into AI & Data Science here, 90% buzzword-free. Follow me and let's grow together!

    1,125,941 followers

    By 2030, we'll see 92 million jobs lost and 170 million jobs created, according to the latest WEF report. We're heading toward a global churn of 22% of current jobs by then. And many of the new ones? They're being shaped and accelerated by AI.

    Some of the fastest-growing roles globally, directly driven by AI adoption, include:
    - AI and Machine Learning Specialists
    - Big Data Analysts
    - AI-augmented UX Designers
    - Information Security Analysts
    - Fintech Engineers
    - Process Automation Specialists

    Many of these roles barely existed at scale just a few years ago. And they're not all technical. We're also seeing roles like prompt engineers, AI ethics leads, and AI product strategists gaining traction across different industries.

    One shift that's becoming more visible now is the rise of AI agents. We're moving from simple model outputs to systems that can take actions, use tools, and follow goals across multiple steps. That shift is bringing new types of roles with it:
    • Engineers and researchers building agent frameworks
    • Product teams defining how agents fit into user journeys
    • Decision engineers designing shared workflows between humans and machines
    • Governance and compliance leads ensuring safety and alignment

    And this shift isn't limited to labs or big tech. Thanks to the growth of open-source AI tools, agent development is becoming more accessible, and open-source frameworks like LangChain are lowering the barrier for experimentation.

    📍 We also just open-sourced GenAI AgentOS, a lightweight framework we've been using internally to run multi-agent systems. If you're playing around with agent workflows, feel free to check it out. GitHub: https://bit.ly/4kzE1Mt And if you're into open source, a ⭐ would mean a lot!

    For more on AI and Data Science, please check my previous posts. I share my journey here. Join me and let's grow together. Alex Wang

    #technology #aiagents #agenticai #generativeai
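    The shift from single model outputs to multi-step agents described above can be illustrated with a toy tool-use loop. Nothing here calls a real LLM or any particular framework; the hard-coded plan below merely stands in for a model's tool choices:

```python
# Toy agent loop: pursue a goal across steps, each step invoking a tool
# and feeding its result back into the running context.

def search_tool(query):
    """Stand-in for a web-search tool."""
    return f"results for '{query}'"

def summarize_tool(text):
    """Stand-in for a summarization tool."""
    return f"summary of [{text}]"

TOOLS = {"search": search_tool, "summarize": summarize_tool}

# A fixed plan stands in for model-generated actions.
PLAN = [("search", "agent frameworks"), ("summarize", None)]

def run_agent(plan):
    context = []
    for tool_name, arg in plan:
        # Later steps operate on the previous step's output.
        tool_input = arg if arg is not None else context[-1]
        context.append(TOOLS[tool_name](tool_input))
    return context[-1]

print(run_agent(PLAN))
```

    Real agent frameworks replace the fixed plan with model-chosen actions and add the governance layer (permissions, audit trails) that the new roles above are responsible for.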
