AGI Program

Charting a path towards thinking machines

Artificial General Intelligence will be the most transformative technology in human history, capable of improving all other areas of investigation and endeavor. Leveraging AGI alongside the ASTR token, we aim to accelerate progress and ensure alignment with the greater good of humanity.

We believe AGI can bring immense good to humanity, and that a variety of approaches are necessary to ensure its success and alignment with society's values. Through the use of the ASTR token, we aim to foster innovative solutions that prioritize both progress and ethical considerations.

Creating AGI that doesn't destroy the things humans care about will require diverse solutions. With that in mind, we promote alternative AI research and support AI initiatives that fall outside the mainstream, leveraging the ASTR token to drive innovation and foster responsible development.

There are many potential targets and complex obstacles on the path to safe and successful AGI. Astentir seeks to complement existing research by supporting alternate pathways and new models. By providing significant resources and leveraging the power of the ASTR token, Astentir offers a home to diverse AI research that would otherwise lack the resources to pursue this transformative technology.

Obelisk: Astentir’s AGI Laboratory

Obelisk is a team of researchers pursuing an exploratory, neuroscience-informed approach to engineering AGI.

Astentir enables the Obelisk team to focus on basic research and take a long-term view. Obelisk is unconstrained by the need to secure funding, generate profit, or publish results. The team also has access to significant computational resources. Its research centers on three core questions:

1. How does an agent continuously adapt to a changing environment and incorporate new information?

2. In a complicated stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to those rewards?

3. How does higher-level planning arise?

Our approaches are heavily inspired by cognitive science and neuroscience. To measure our progress, we implement reinforcement learning tasks on which humans currently outperform state-of-the-art AI.
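The second question above is the classic credit-assignment problem in reinforcement learning. The sketch below is purely illustrative: the corridor environment, the tabular Monte Carlo agent, and all hyperparameters are assumptions for exposition, not Obelisk's actual benchmarks or methods. It shows the core difficulty in miniature: a single reward arrives only at the end of an episode, and the agent must spread that credit back over every action that led there.

```python
# A minimal sketch of a sparse-reward credit-assignment task. The corridor
# environment, the tabular Monte Carlo agent, and all hyperparameters are
# illustrative assumptions, not Obelisk's actual benchmarks or methods.
import random
from collections import defaultdict


class Corridor:
    """The agent starts at position 0 and earns a single reward only if it
    reaches the far end; every intermediate step gives zero reward."""

    def __init__(self, length=8):
        self.length = length

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action 1 moves right (toward the goal), action 0 moves left.
        self.pos = self.pos + 1 if action == 1 else max(0, self.pos - 1)
        done = self.pos == self.length
        reward = 1.0 if done else 0.0  # sparse: reward appears only at the end
        return self.pos, reward, done


def greedy(q, state):
    # Break ties randomly so the untrained agent is not biased toward one action.
    values = [q[(state, a)] for a in (0, 1)]
    best = max(values)
    return random.choice([a for a, v in zip((0, 1), values) if v == best])


def train(episodes=2000, gamma=0.99, epsilon=0.1, alpha=0.1):
    env = Corridor()
    q = defaultdict(float)  # q[(state, action)] value estimates
    for _ in range(episodes):
        state, done, trajectory, steps = env.reset(), False, [], 0
        while not done and steps < 200:
            action = random.choice((0, 1)) if random.random() < epsilon else greedy(q, state)
            next_state, reward, done = env.step(action)
            trajectory.append((state, action, reward))
            state, steps = next_state, steps + 1
        # Credit assignment: walk the episode backwards, discounting the single
        # terminal reward onto every state-action pair that preceded it.
        g = 0.0
        for s, a, r in reversed(trajectory):
            g = r + gamma * g
            q[(s, a)] += alpha * (g - q[(s, a)])
    return q


if __name__ == "__main__":
    q = train()
    policy = [greedy(q, s) for s in range(8)]
    print("greedy policy along the corridor:", policy)  # converges toward all 1s
```

Discounted Monte Carlo returns solve this toy case by sweeping the terminal reward back along the trajectory; the open question is how an agent can do the same efficiently in the complicated, stochastic environments the second question describes.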

AGI Safety and ASTR

A pillar of Astentir's philosophy is openness and sharing. That said, we take the risks associated with artificial intelligence very seriously.

We continually measure and assess the risk of harm as our research progresses so that we can avoid danger. Beyond that, however, we hope to contribute positively to safety. Some paths to AGI are probably safer than others. Discovering that a particular brain-like AI is much safer or more dangerous than some other alternative could be incredibly valuable if it shifts humanity’s AI progress in a safer direction.

With this in mind, we will carefully consider the safety implications before releasing source code or publishing results.