AGI Program

Charting a path towards thinking machines
Artificial General Intelligence will be the most transformative technology in human history, capable of improving all other areas of investigation and endeavor. Leveraging AGI alongside the ASTR token, we aim to accelerate progress and ensure alignment with the greater good of humanity.

We believe AGI can bring immense good to humanity, and that a variety of approaches are necessary to ensure its success and alignment with society's values. Through the use of the ASTR token, we aim to foster innovative solutions that prioritize both progress and ethical considerations.

Creating AGI that doesn't destroy the things humans care about will require diverse solutions. With that in mind, we promote alternative AI research and support AI initiatives that fall outside the mainstream, leveraging the ASTR token to drive innovation and foster responsible development.

There are many potential targets and complex obstacles on the path to safe and successful AGI. Astentir seeks to complement existing research by supporting alternate pathways and new models. By providing significant resources and leveraging the power of the ASTR token, Astentir offers a home to diverse AI research that would otherwise lack access to this transformative technology.

Obelisk: Astentir’s AGI Laboratory

Obelisk is a team of researchers pursuing an exploratory, neuroscience-informed approach to engineering AGI.

Astentir enables the Obelisk team to focus on basic research and take a long-term view. Obelisk is unconstrained by the need to secure funding, generate profit, or publish results. The team also has access to significant computational resources.

Obelisk's research centers on three open questions:

1. How does an agent continuously adapt to a changing environment and incorporate new information?

2. In a complicated stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to those rewards?

3. How does higher-level planning arise?

Our approaches are heavily inspired by cognitive science and neuroscience. To measure our progress, we implement reinforcement learning tasks where humans currently outperform state-of-the-art AI.
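The sparse-reward credit-assignment question above can be made concrete with a toy example. The sketch below is a hypothetical illustration, not Obelisk's actual benchmark: a chain environment whose only reward arrives at the goal, and tabular Q-learning whose bootstrapped updates propagate that terminal reward back to the earlier actions that earned it.

```python
import random

class SparseChain:
    """Hypothetical sparse-reward environment: the agent starts at
    position 0 and must reach position `length`; the only nonzero
    reward (+1) arrives on reaching the goal, so credit for success
    must flow back to the earlier "right" moves."""

    def __init__(self, length=6):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # 0 = left, 1 = right
        self.pos = min(self.length, max(0, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length
        return self.pos, (1.0 if done else 0.0), done


def q_learning(env, episodes=1000, alpha=0.5, gamma=0.9, eps=0.1):
    # Optimistic initialization (q = 1.0) drives systematic exploration,
    # which matters precisely because rewards are sparse.
    q = {(s, a): 1.0 for s in range(env.length + 1) for a in (0, 1)}
    rng = random.Random(0)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(4 * env.length):  # step cap per episode
            # epsilon-greedy action selection
            a = rng.choice((0, 1)) if rng.random() < eps else max((0, 1), key=lambda x: q[(s, x)])
            s2, r, done = env.step(a)
            # Bootstrapped update: the terminal reward seeps backward,
            # one state per visit, assigning credit to earlier actions.
            target = r if done else r + gamma * max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q


q = q_learning(SparseChain())
```

After training, "right" dominates "left" in every non-terminal state even though no intermediate step was ever directly rewarded; temporal-difference bootstrapping is one classical answer to the credit-assignment question, and part of Obelisk's interest is in how brains solve the same problem at far greater scale.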

AGI Safety and ASTR

A pillar of Astentir's philosophy is openness and sharing. That said, we take the risks associated with artificial intelligence very seriously.

We continually measure and assess the risks of harm as our research progresses to ensure that we avoid danger. Beyond that, however, we hope to contribute positively to safety. Some paths to AGI are probably safer than others. Discovering that a particular brain-like AI is much safer or more dangerous than some other alternative could be incredibly valuable if it shifts humanity’s AI progress in a safer direction.

With this in mind, we will carefully consider the safety implications before releasing source code or publishing results.