Ilya Sutskever, a name synonymous with innovative advancements in artificial intelligence, made headlines in the AI community in May 2024.
Having co-founded OpenAI, a research powerhouse dedicated to the ethical development of artificial general intelligence (AGI), Sutskever abruptly departed the organisation he helped create.
His departure was announced on the same day as that of his colleague Jan Leike, who co-led OpenAI's "superalignment" team.
Whispers of a rift between Sutskever and OpenAI leadership over the prioritisation of safety in AI development began to circulate.
This dramatic exit wasn't the end of Sutskever's journey, however. Just one month later, he announced his next venture – Safe Superintelligence Inc. (SSI).
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
This new company marked a bold departure from OpenAI, with a laser focus on building a superintelligence that prioritises safety above all else.
Ilya Sutskever is an Israeli-Canadian computer scientist who has made significant contributions to the field of artificial intelligence, particularly in deep learning.
He is most well-known for co-inventing AlexNet, a convolutional neural network that achieved groundbreaking results in the 2012 ImageNet competition and helped propel deep learning into the mainstream.
Sutskever was also a co-founder and former chief scientist at OpenAI, a research company dedicated to developing safe artificial general intelligence.
While at OpenAI, he played a leading role in the development of the GPT series of large language models.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
In June 2024, Sutskever co-founded Safe Superintelligence Inc., where he serves as Chief Scientist, aiming to focus solely on creating a safe and beneficial superintelligence.
Sutskever's departure from OpenAI stemmed from a fundamental disagreement about AI research priorities.
Sutskever (right) with Sam Altman, OpenAI CEO (left).
He believed OpenAI was prioritising rapid advancement in capabilities over ensuring the safety of increasingly powerful AI systems, a concern that safety researchers voiced around the release of the GPT-4o model.
Sutskever, along with other safety researchers, felt that robust safety protocols were crucial to develop alongside advancements.
This misalignment with OpenAI's leadership, focused on "shiny products" according to Sutskever, ultimately led him to establish Safe Superintelligence Inc. (SSI) to focus solely on safe AI development.
SSI's mission statement is refreshingly clear and concise – to develop a safe superintelligence.
This singular focus permeates every aspect of the company's structure and operation.
Unlike traditional tech companies with multiple product lines and commercial pressures, SSI operates with a streamlined approach.
Management overhead and product cycles are minimised, ensuring that resources and focus remain firmly on the core objective – building a safe superintelligence.
Additionally, despite being newly created, SSI's X account surged past 68,400 followers within just two weeks of its first post.
This rapid growth reflects the high level of anticipation and interest surrounding the project.
Sutskever envisioned SSI as a revolutionary entity, one unlike any AI research lab before it.
Here are the cornerstones of SSI's approach:
One might say SSI isn't merely a company; it's a mission statement come to life.
The company's entire identity revolves around its core objective – building safe superintelligence. This translates to a streamlined operation, free from the distractions of product cycles or profit margins.
Every decision and resource allocation is meticulously directed towards achieving their paramount goal.
At the heart of SSI's philosophy lies the notion that safety and capability are not mutually exclusive.
They envision a future where advancements in AI capabilities are accompanied by ironclad safety measures, developed in tandem.
This ensures that superintelligence doesn't become an uncontrollable force but a powerful tool wielded for good.
Recognising the immense challenge they face, SSI isn't seeking to build an army of researchers.
Instead, they're meticulously assembling a select group of the world's most brilliant minds – a "lean, cracked" team, as Sutskever himself described them.
This elite group will focus solely on the development of safe superintelligence, fostering a collaborative environment where the best ideas can flourish.
SSI understands that geographical location plays a crucial role in attracting top talent.
They've strategically established offices in Palo Alto and Tel Aviv, both hubs brimming with cutting-edge research and a deep pool of qualified engineers and researchers.
But why Tel Aviv?
An X user shared a possible reason for SSI being located there.
Ilya 的新公司 SSI 大家漏掉一个细节,他们的办公室除了在硅谷以外,还在以色列的特拉维夫市。因为 Ilya 和 Daniel Gross 都在以色列的耶路撒冷度过了儿童时期,同时以色列的人才密度也是他们所看重的。 pic.twitter.com/dXLtdqMlvg
— Glowin (@glow1n) June 21, 2024
Translation:
Everyone has overlooked a detail about Ilya's new company SSI: their office is not only in Silicon Valley, but also in Tel Aviv, Israel. This is because Ilya and Daniel Gross both spent their childhood in Jerusalem, Israel, and they also value Israel's talent density.
SSI recognises that the pursuit of safe superintelligence is a marathon, not a sprint.
Their business model is designed to insulate them from the short-term pressures of commercialisation.
This allows them to focus on long-term research and development, free from the constraints of quarterly profits.
Sutskever isn't alone in this ambitious venture. He is joined by two accomplished figures in the AI landscape – Daniel Gross and Daniel Levy.
A veteran of the AI world, Gross brings a wealth of experience to SSI. Prior to co-founding and serving as CEO of SSI, Gross held the prestigious position of AI lead at Apple.
His journey began in Jerusalem, Israel, where he was born in 1991.
In 2010, Gross made headlines by becoming the youngest founder accepted into the Y Combinator program, launching Greplin (later renamed Cue), a pioneering search engine for consolidating online accounts.
Recognised for his entrepreneurial prowess, Gross was named to Forbes' "30 Under 30" in Technology and Business Insider's "25 Under 25" in Silicon Valley, both in 2011.
His success continued with Cue's acquisition by Apple in 2013.
Following this, Gross joined Y Combinator as a partner, focusing on AI and launching the "YC AI" program in 2017. In 2018, he founded Pioneer, an early-stage startup accelerator and fund.
Gross's deep understanding of AI, coupled with his entrepreneurial track record, positions him as a pivotal figure in shaping SSI's strategic direction.
His insights will be critical as SSI navigates the complexities of AI safety and development.
Investors can be confident in Gross’s ability to secure funding, given his proven success in attracting capital for groundbreaking research initiatives like the AI Grant and Andromeda Cluster.
It is a great pleasure and honor to cofound this new endeavor with @ilyasut and @daniellevy__: https://t.co/Wd9V5BP3Rn
— Daniel Gross (@danielgross) June 19, 2024
Levy's reputation as a leading AI researcher precedes him.
His expertise in training large AI models, honed during his tenure at OpenAI, makes him an invaluable asset to SSI.
As both co-founder and Principal Scientist, Levy's technical prowess extends beyond his credentials.
His experience working alongside Sutskever at OpenAI ensures a seamless collaboration as they pursue this revolutionary project.
Levy's role reflects SSI's unwavering commitment to pushing the boundaries of what's possible in AI safety and capability.
Beyond excited to be starting this company with Ilya and DG! I can't imagine working on anything else at this point in human history. If you feel the same and want to work in a small, cracked, high-trust team that will produce miracles, please reach out. https://t.co/Hm0qutNoP8
— Daniel Levy (@daniellevy__) June 19, 2024
SSI's mission has the potential to redefine the AI sector in several ways.
Firstly, by prioritising safety, SSI sets a new standard for responsible AI development.
Their success could encourage other companies to adopt similar safety-first approaches.
Secondly, SSI's breakthroughs in safety protocols could be applicable to a wide range of AI systems, not just superintelligence.
This could lead to significant advancements in the overall safety and trustworthiness of AI technology.
Despite its ambitious goals, SSI faces several challenges.
Critics argue that developing superintelligence itself is fraught with technical difficulty, and integrating robust safety measures further complicates the process.
The concurrent development of both capabilities and safety mechanisms might be overly optimistic and difficult to achieve within projected timelines.
Additionally, some argue that SSI's singular focus on safety might limit its ability to adapt to the ever-changing dynamics of the AI market.
Focusing solely on superintelligence development could restrict SSI's ability to respond to emerging trends or unforeseen obstacles.
Furthermore, there's a potential risk associated with relying on a small, elite team.
If key members leave or fail to deliver, the concentration of knowledge and expertise within the group could become a vulnerability.
As of today, 8 July 2024, Safe Superintelligence Inc. (SSI) hasn't disclosed any information about their funding or who their backers are.
There has been speculation about potential backers based on the founders' track records, but nothing has been confirmed; SSI has chosen to remain tight-lipped about its financial situation.
The quest to achieve safe superintelligence is an audacious undertaking, one fraught with technical hurdles and philosophical quandaries.
SSI, with its laser focus and "lean, cracked" team, embodies a daring approach to this challenge.
Their success, if achieved, could usher in a new era of AI development, prioritising safety and setting a high bar for responsible research.
However, the road ahead is strewn with uncertainties.
Can a small team effectively navigate the complexities of superintelligence and safety?
Will their singular focus limit their ability to adapt in this rapidly evolving field?
SSI's journey will be closely watched, with the potential to redefine the future of AI and its impact on humanity.
Superintelligence is within reach.
— SSI Inc. (@ssi) June 19, 2024
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…