Articles

  • Sep 25, 2024 | cxotoday.com | Avivah Litan

    By Avivah Litan. AI agents operate autonomously, semi-autonomously, or within multi-agent systems, leveraging artificial intelligence to perceive, decide, act, and achieve goals in both digital and physical environments for various use cases (see Innovation Insight: AI Agents). Enterprises are already integrating or customizing products with AI agent capabilities, such as Microsoft Copilot Studio, Azure AI Studio, AWS Bedrock, and Google NotebookLM.

  • Sep 23, 2024 | computerweekly.com | Avivah Litan

    Agents represent a step-change in the use of artificial intelligence in the enterprise - as attendees at Salesforce's annual conference saw firsthand this month - but they do not come without risks.

  • Sep 27, 2023 | bcs.org | Avivah Litan

    In the dynamic landscape of artificial intelligence (AI), the surge in interest surrounding generative AI tools like ChatGPT and innovations like Microsoft 365 Copilot is palpable. While these technologies promise transformative potential, they also raise important concerns about safeguarding sensitive data. CIOs and IT leaders should take the following steps to mitigate the sensitive data risks associated with ChatGPT.

  • Jul 7, 2023 | ryt9.com | Avivah Litan | VP Analyst

    By Avivah Litan, VP Analyst at Gartner. As generative AI innovation continues at a breakneck pace, concerns around security and risk have become increasingly prominent. Some lawmakers have requested new rules and regulations for AI tools, while some technology and business leaders have suggested a pause on the training of AI systems to assess their safety. But generative AI isn't going away: the reality is that generative AI development is not stopping.

  • Jun 30, 2023 | blogs.gartner.com | Avivah Litan

    GenAI and LLM hallucinations are a major problem. Below we describe what they are and the emerging solutions that can minimize them. What is an AI hallucination? Hallucinations are completely fabricated outputs from large language models. Even though they represent completely made-up facts, the LLM output presents them with confidence and authority. What are the dangers of AI hallucination?

