
Articles
-
2 weeks ago |
cancertodaymag.org | Stephen Ornes
DARRELL WILSON HAS RECEIVED nearly every type of available treatment for metastatic prostate cancer. Since his diagnosis in 2009, the 74-year-old veteran and former banker, who lives in Boise, Idaho, has undergone chemotherapy, multiple types of hormone-deprivation therapy, immunotherapy and a kind of radiation called brachytherapy that uses tiny radioactive seeds implanted directly in the prostate.
-
2 weeks ago |
quantamagazine.org | Stephen Ornes
In the past, researchers have tried to improve on Shor’s algorithm for factoring by simulating a qubit using a continuous system, with its expanded set of possible values. But even if your system computes with continuous qubits, it will still need a lot of them to factor numbers, and it won’t necessarily go any faster. “We were wondering whether there’s a better way of using continuous variable systems,” König said. They decided to go back to basics.
-
2 weeks ago |
flipboard.com | Stephen Ornes
IonQ (NYSE:IONQ) stock gained in Monday premarket trading after the company announced it has agreed to acquire Oxford Ionics for $1.075 billion, consisting of $1.065 billion in shares of IonQ common stock and approximately $10 million in cash. The transaction will combine IonQ’s quantum computing, …
-
2 weeks ago |
buff.ly | Stephen Ornes | Eric James Beyer | Matt von Hippel | Ben Brubaker
Quantum computers still can’t do much. Almost every time researchers have found something the high-tech machines should one day excel at, a classical algorithm comes along that can do it just as well on a regular computer. One notable exception? Taking apart numbers. In 1994, the mathematician Peter Shor devised an algorithm that would let quantum computers factor big numbers exponentially faster than classical machines.
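Shor’s speedup comes entirely from a quantum subroutine that finds the period r of f(x) = a^x mod N; turning that period into factors is ordinary number theory. Below is a minimal Python sketch of that classical part, with a brute-force loop standing in for the quantum period-finding step (function names are illustrative, not from the article):

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Brute-force the order r of a modulo n (smallest r with a**r % n == 1).
    This is the step a quantum computer running Shor's algorithm does
    exponentially faster; here it is just a classical placeholder."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int) -> tuple[int, int] | None:
    """Try to split n using a random base a: if the period r is even and
    a**(r/2) is not -1 mod n, then gcd(a**(r//2) - 1, n) is a nontrivial
    factor of n."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g  # lucky guess: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None  # bad base; the full algorithm retries with a new a
    p = gcd(pow(a, r // 2, n) - 1, n)
    return p, n // p

print(shor_classical_part(15, 7))  # (3, 5)
```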
-
3 weeks ago |
wired.jp | Stephen Ornes
Among AI models, large language models (LLMs) work well precisely because they are so big. The latest models from OpenAI, Meta Platforms and DeepSeek use hundreds of billions of “parameters,” the adjustable knobs that determine how pieces of data connect to one another and that are fine-tuned during training. The more parameters a model has, the better it can recognize patterns and relationships, which makes it more powerful and more accurate.

LLMs are energy gluttons

But that power comes at a cost. Training an AI model with hundreds of billions of parameters requires enormous computing resources: Google reportedly spent $191 million (about ¥28 billion) training its Gemini 1.0 Ultra model. Large language models also demand considerable computing power every single time they perform a task, which has made them notorious energy gluttons. The Electric Power Research Institute...
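As an illustrative sketch, not something from the article: the “tuning knob” picture corresponds to gradient descent, which repeatedly nudges each parameter to reduce the model’s error. The toy example below trains a single parameter; an LLM applies the same update to hundreds of billions of them, which is where the training cost comes from.

```python
# One "parameter" tuned by gradient descent, the same process that
# adjusts hundreds of billions of knobs when training an LLM.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

w = 0.0    # a single parameter (the "tuning knob")
lr = 0.05  # learning rate

for step in range(200):
    # Mean-squared-error gradient for the model y_hat = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the knob to reduce the error

print(round(w, 3))  # converges near 2.0
```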
X (formerly Twitter)
- Followers: 913
- Tweets: 714
- DMs Open: Yes