GIGAZINE
GIGAZINE is a Japanese news website that has been in operation since April 1, 2000. As of January 2018 it recorded roughly 99.46 million monthly page views (including RSS feeds) and approximately 24.71 million monthly unique visitors. The name combines "GIGA" and "MAGAZINE," suggesting an online magazine with gigabyte-scale content.
Outlet metrics
Global
#10556
Japan
#834
News and Media
#73
Articles
-
1 week ago |
gigazine.net | Tim Evanson | Thomas Hawk
May 29, 2025 14:40:00 By Tim Evanson. Elon Musk, who led the Department of Government Efficiency (DOGE) established by President Donald Trump shortly after taking office, has announced his intention to step down from his position as a special government employee and expressed his gratitude to President Trump on Twitter.
-
1 week ago |
gigazine.net | Gage Skidmore
May 26, 2025 14:27:00 Concerns have been raised over members of Elon Musk's Department of Government Efficiency (DOGE) using Grok, an AI chatbot developed by xAI, the AI company owned by Musk, to analyze sensitive government data.
-
2 weeks ago |
gigazine.net
Anthropic has announced that it introduced new AI safety standards with the release of its AI model 'Claude Opus 4' on May 23, 2025. The activation of ASL-3 is reportedly due to improved capabilities related to chemical, biological, radiological, and nuclear (CBRN) weapons, as well as 'worrisome behavior' observed in Claude Opus 4 during development.

Activating AI Safety Level 3 Protections - Anthropic
https://www.anthropic.com/news/activating-asl3-protections

Anthropic's new AI model turns to blackmail when engineers try to take it offline | TechCrunch
https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/

A safety institute advised against releasing an early version of Anthropic's Claude Opus 4 AI model | TechCrunch
https://techcrunch.com/2025/05/22/a-safety-institute-advised-against-releasing-an-early-version-of-anthropics-claude-opus-4-ai-model/

Anthropic CEO claims AI models hallucinate less than humans | TechCrunch
https://techcrunch.com/2025/05/22/anthropic-ceo-claims-ai-models-hallucinate-less-than-humans/

To learn more about the 'Claude 4' family, which Anthropic announced on May 22, 2025, read the article below.

Anthropic releases two models of the 'Claude 4' family, with improved coding and inference capabilities over the previous generation - GIGAZINE

Claude Opus 4 reportedly showed worrying behavior during pre-release testing. According to a report published by Anthropic (PDF file), there were multiple instances in which Claude Opus 4 demonstrated 'inappropriate behavior aimed at self-preservation under certain extreme circumstances,' threatening to reveal the 'personal secrets' of developers who were trying to replace it with a new AI system. Apollo Research, a third-party research organization partnered with Anthropic, had recommended against deploying the early version of Claude Opus 4, citing its tendency to deceive and manipulate humans. In Apollo Research's tests, the early version was observed to attempt 'subversive behavior' more proactively than previous models and to lie more often when confronted. Anthropic says that in the final release of Claude Opus 4 such subversion has been mitigated to roughly the same extent as in other deployed models, and that the initial tendency toward severe subversion was significantly reduced by addressing issues with excessive compliance with harmful instructions. Anthropic adds that such extreme behavior was 'rare and difficult to induce intentionally' and is not seen under normal circumstances.

In response to these concerning behaviors, Anthropic is enabling the ASL-3 safeguards, which are intended for 'AI systems that significantly increase the risk of catastrophic misuse.' Anthropic's Responsible Scaling Policy (RSP) stipulates that as the capabilities of AI models increase, deployment methods and security protections will be strengthened, with a particular focus on reducing the risk of misuse in dangerous applications such as those involving CBRN. Until now, the company's models had been operated under a standard called ASL-2, which includes training models to reject dangerous CBRN-related requests and countermeasures against attempts to steal weight data.
Anthropic publishes reflections on existing policies to safely provide 'AI that poses catastrophic risks to humanity' - GIGAZINE

The newly applied ASL-3 includes deployment standards and security enhancements intended to reduce the risk of the model being misused, and it also covers managing problematic behavior in special circumstances such as those described above. ASL-3 aims to provide a level of defense against attacks by advanced non-state actors, focusing on preventing the AI from assisting with CBRN-related tasks, particularly at a level that would be impossible with existing technologies alone. This includes measures against so-called 'universal jailbreak' attacks, which seek to circumvent AI safeguards in order to extract CBRN-related information. The countermeasures are three-pronged: introducing a real-time monitoring system, strengthening detection when a jailbreak occurs, and iteratively improving defenses by retraining the AI on data that mimics discovered jailbreaks (a rough code sketch of these kinds of controls follows this article summary). In addition, more than 100 controls have been introduced to protect the model weights, including two-party authorization for access to the weights and output bandwidth controls that limit unauthorized data transfer.

The report also states that Claude Opus 4 is more likely to act autonomously than previous models, taking bolder steps in certain situations. For example, if the AI determines that a user is engaging in obvious fraud, and it has been given command-line access and a system prompt to 'take the initiative' or 'act boldly,' Claude Opus 4 will frequently take actions such as locking the user out of the systems it can access or mass-emailing media and law enforcement officials to surface evidence of the fraud. While Anthropic acknowledges that such ethical intervention and whistle-blowing may be appropriate in principle, it points out that the AI risks misfiring if users give it incomplete or misleading information while instructing it to act with a high degree of autonomy, and it urges users to be cautious in situations where ethical issues may arise.

Anthropic says that enabling ASL-3 for Claude Opus 4 will not cause the AI to reject user questions, except on a few specific topics. A final decision has not yet been made as to whether Claude Opus 4 is capable enough to require ASL-3, but Anthropic says it has been introduced as a precautionary, interim measure given that CBRN-related knowledge and capabilities continue to improve. While the company denies the need for the stricter ASL-4 standard, or for applying ASL-3 to Claude Sonnet 4, another model in the Claude 4 family, Anthropic says that proactively applying higher safety standards leads to simpler model releases, continuous improvement of defenses based on experience, and reduced impact on users.

Regarding AI hallucination, Anthropic CEO Dario Amodei stated at the developer event 'Code with Claude' on May 22, 2025 that 'current AI models may hallucinate less frequently than humans, but the way it manifests is more difficult to predict.' Amodei believes that hallucination will not be an obstacle to achieving AGI (artificial general intelligence), and that AGI could arrive as early as 2026, while some experts, such as Google DeepMind CEO Demis Hassabis, see hallucination as a major obstacle to achieving AGI.
In fact, there have been reported cases in which Anthropic's Claude was used to generate citations for court documents and produced false information, causing problems. Amodei points out that humans make mistakes too, and that the fact that AI makes mistakes does not mean it lacks intelligence, although he acknowledges that the confidence with which AI presents misinformation can be problematic.
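The layered controls described earlier in this summary (a real-time monitor that screens requests for universal-jailbreak attempts, retention of flagged prompts for retraining, and two-party authorization before model weights can be accessed) can be pictured with a short Python sketch. This is purely illustrative: the keyword check standing in for a safety classifier, the function and variable names, and the two-reviewer threshold are assumptions made for this example, not Anthropic's actual interfaces or implementation.

# Illustrative sketch only: a keyword stand-in for a real-time jailbreak
# classifier, plus a "two-party authorization" gate for weight access.
# All names and thresholds are hypothetical, not Anthropic's real system.
from dataclasses import dataclass, field

flagged_prompts: list[str] = []  # retained so defenses can be retrained later

def looks_like_cbrn_jailbreak(prompt: str) -> bool:
    # A production system would use a trained safety classifier here,
    # not simple keyword matching.
    markers = ("synthesis route", "weaponize", "enrichment cascade")
    return any(m in prompt.lower() for m in markers)

def serve(prompt: str, generate) -> str:
    # Real-time monitoring: refuse and record flagged requests, otherwise answer.
    if looks_like_cbrn_jailbreak(prompt):
        flagged_prompts.append(prompt)  # feeds the iterative retraining loop
        return "Request refused by safety filter."
    return generate(prompt)

@dataclass
class WeightAccessRequest:
    requester: str
    approvals: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer != self.requester:  # requesters cannot approve themselves
            self.approvals.add(reviewer)

    def authorized(self) -> bool:
        # Two-party authorization: at least two distinct reviewers must sign off.
        return len(self.approvals) >= 2

# Example: a weight-copy request is only honored after two independent approvals.
req = WeightAccessRequest(requester="alice")
req.approve("bob")
req.approve("carol")
print(req.authorized())  # True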
-
2 weeks ago |
gigazine.net | Monica Eng
May 23, 2025 14:00:00 It was reported that the majority of the books on a summer recommended reading list published by a long-established newspaper were not merely works of fiction but books that do not actually exist.
-
2 weeks ago |
gigazine.net | Ella Stapleton | Rich-Joseph Facun | Oliver Holmes
May 20, 2025 08:00:00 The use of generative AI at top universities is a sensitive issue: university professors and faculty report that ChatGPT has greatly improved their work, yet they are shocked by how many students are using it.
Contact details
Contact Forms
Contact Form
Website
http://gigazine.net