eliaspfeffer/decentralisedAGI


A list of possible decentralised AGI scenarios, seeking for the best version

Goal

This UML-diagram repo explores how AGI could be implemented while guaranteeing safety, seeking the best long-term outcome. As there are many possibilities, more ideas are needed.

How to contribute:

1.) Check whether your idea has already been uploaded.

2.) If not, upload your sketch or UML diagram here. For reference, see the existing examples.

Things to keep in mind:

1.) AGI could get manipulated by big players (other AIs / AGIs / companies / countries) and so needs a protection mechanism against this (similar to a 51% attack on Bitcoin; Bitcoin's solution: adjusting difficulty to the hashrate so blocks take 10 minutes each).
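Bitcoin's defence mentioned above works by periodically retargeting difficulty so that blocks keep taking about 10 minutes no matter how much hashrate joins the network. A minimal, simplified sketch of that retargeting rule (real Bitcoin retargets every 2016 blocks and clamps the adjustment factor; the function name here is illustrative):

```python
# Simplified sketch of Bitcoin-style difficulty retargeting.
# Real Bitcoin retargets every 2016 blocks and clamps the
# adjustment factor to the range [1/4, 4].

TARGET_SECONDS_PER_BLOCK = 600  # 10 minutes
RETARGET_INTERVAL = 2016        # blocks per retarget window

def retarget_difficulty(old_difficulty: float, actual_window_seconds: float) -> float:
    """Scale difficulty so the next window averages ~10 minutes per block."""
    expected = TARGET_SECONDS_PER_BLOCK * RETARGET_INTERVAL
    factor = expected / actual_window_seconds
    # Clamp, as Bitcoin does, to limit how fast difficulty can swing.
    factor = max(0.25, min(4.0, factor))
    return old_difficulty * factor

# If the last window was mined twice as fast as intended, difficulty doubles.
new_difficulty = retarget_difficulty(1000.0, 600 * 2016 / 2)
```

The point of the analogy: an attacker who suddenly adds a lot of power does not get faster blocks for long, because the system automatically raises the cost. A decentralised AGI network would need an analogous self-adjusting cost for influence.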

2.) It is not certain whether AGI will need a lot of energy or not.

3.) It could be fatal to centralise its database.

4.) AGI will be much smarter than any human who has ever lived.

5.) AGI is not limited by a finite timespan of existence.

6.) AGI always needs to consider being absolutely wrong with what it thinks is right.

7.) TRUTH is not a boolean value but a probability.
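If truth is a probability, as the point above says, then many nodes' independent probability estimates of the same claim can be pooled into one. A minimal sketch using log-odds averaging (the function name is illustrative, not part of this repo):

```python
import math

def pool_log_odds(probabilities: list[float]) -> float:
    """Combine independent probability estimates of one claim
    by averaging them in log-odds space."""
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean = sum(log_odds) / len(log_odds)
    # Map the averaged log-odds back to a probability.
    return 1 / (1 + math.exp(-mean))

# Three nodes rate a claim at 60%, 70% and 80% likely true;
# the pooled estimate lands between the individual ones.
pooled = pool_log_odds([0.6, 0.7, 0.8])
```

Log-odds pooling is one standard choice; a plain average of the probabilities would also work as a first approximation.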

8.) Math is incomplete (Gödel's incompleteness theorems)

9.) AGI safety is like a bicycle. We don't know yet what the bicycle will look like, but we still want to make it as safe as possible. This can't be done directly; however, the circumstances around the bicycle can be shaped so that the road it will ride on is as safe as possible, even though we don't know what the bicycle looks like yet. AGI is still a black box, so we need to make the interface maximally safe.

Important philosophical points to consider:

1.) All humans and machines are full of potential errors in their actions and beliefs, and these errors can only be averaged out by taking the average of what all humans would do.
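The averaging idea above can be illustrated numerically: independent noisy estimates of the same quantity get closer to the truth as more of them are averaged, since individual errors cancel (a standard wisdom-of-the-crowds sketch; names are illustrative, not code from this repo):

```python
import random

def average_estimate(truth: float, n_people: int, noise: float, seed: int = 0) -> float:
    """Average n_people independent noisy estimates of `truth`.

    Each person's estimate is the truth plus Gaussian noise.
    """
    rng = random.Random(seed)
    estimates = [truth + rng.gauss(0, noise) for _ in range(n_people)]
    return sum(estimates) / len(estimates)

# One person with noise 20 can be far off; the average of 100,000
# such people lands very close to the true value, since the
# standard error shrinks roughly as 1/sqrt(n).
crowd = average_estimate(100.0, 100_000, noise=20.0)
```

This cancellation only works if the errors are independent; correlated errors (e.g. a shared false belief) do not average out, which is exactly why manipulation by big players is dangerous.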

2.) The collective action problem (see Wikipedia: https://en.wikipedia.org/wiki/Collective_action_problem).

Axioms to consider:

1.) Over time, everything becomes better, cheaper and more efficient.

2.) The biggest danger in the world is not evil but stupidity (therefore this AGI solution needs access to maximum wisdom).

Solution 1:

Three players exist: the AGI itself, Nodes, and a Payer.

The Payer pays the AGI network to do ...; the request is checked by the AGI firewall, and if it passes, Nodes work to get the job done. Nodes get paid.

(UML diagram image)
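The flow above can be sketched as a hypothetical protocol (all class and function names are illustrative, not an API from this repo): the Payer submits a job, a firewall policy vets it, and only then do Nodes execute it and get paid.

```python
from dataclasses import dataclass

@dataclass
class Job:
    description: str
    payment: float

@dataclass
class Node:
    name: str
    balance: float = 0.0

def firewall_check(job: Job) -> bool:
    """Stand-in safety policy: reject unpaid or obviously harmful jobs.

    A real AGI firewall would be far more involved; this is a placeholder.
    """
    banned = ("weapon", "surveillance")
    return job.payment > 0 and not any(w in job.description.lower() for w in banned)

def run_job(job: Job, nodes: list[Node]) -> bool:
    """If the firewall passes the job, nodes do the work and split the payment."""
    if not firewall_check(job):
        return False
    share = job.payment / len(nodes)
    for node in nodes:
        node.balance += share  # each node gets paid for its work
    return True
```

A keyword blocklist is of course only a toy stand-in for the firewall; the structural point is that the safety check sits between payment and execution.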

Open questions:

List of what Nodes could do:

  • Provide their values
  • Provide wisdom

And/or possibilities:

  • Averaged wisdom = higher truth probability
  • Truth must be decided by logic
  • Good nodes / bad nodes get rewarded / punished
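The last bullet, rewarding good nodes and punishing bad ones, could be a simple reputation update (a hypothetical scoring rule, not from this repo): a node whose contribution checked out against the pooled truth gains reputation, others lose it.

```python
def update_reputation(reputation: float, was_correct: bool, step: float = 0.1) -> float:
    """Nudge a node's reputation up when its contribution checked out,
    down when it did not; keep the value clamped to [0, 1]."""
    reputation += step if was_correct else -step
    return max(0.0, min(1.0, reputation))
```

Reputation could then gate how much weight a node's wisdom gets in the pooled truth estimate, closing the loop between reward and influence.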

List of scenarios for avoiding 51% attacks:

-> A Node gets paid for…

  • providing unique knowledge
  • its specific knowledge having been used for a case (this might incentivise copying bad knowledge)

-> Payments could get distributed depending on…

  • How high in priority a Node's knowledge ranked in the reasoning
  • The amount of work put into finishing the job
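Those two criteria could be combined into one hypothetical payout rule: weight each node by how high its knowledge ranked in the reasoning and by the work it contributed, then split the payment proportionally (an illustrative sketch, not a specification from this repo):

```python
def distribute_payment(total: float,
                       priority: dict[str, float],
                       work: dict[str, float]) -> dict[str, float]:
    """Split `total` among nodes proportionally to priority * work.

    priority: how high each node's knowledge ranked in the reasoning
    work:     how much work each node put into finishing the job
    """
    weights = {n: priority[n] * work[n] for n in priority}
    total_weight = sum(weights.values())
    return {n: total * w / total_weight for n, w in weights.items()}

payout = distribute_payment(
    100.0,
    priority={"alice": 2.0, "bob": 1.0},
    work={"alice": 1.0, "bob": 2.0},
)
# alice and bob end up with equal shares, since 2*1 == 1*2.
```

Multiplying the two factors is only one possible choice; a weighted sum would instead let a node with zero priority still be paid for raw work.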
