- i wanted to be able to run the master build in docker, but i also wanted the ability to add custom things of my own choosing, so this is a layer on top of AutoGPT.
- to get started, either clone or fork the main repo 'https://github.com/Significant-Gravitas/Auto-GPT'
- every app needs a home, so the only thing i keep changing in the Significant-Gravitas/Auto-GPT master code is the homedir. Mine is `/home/atlas`.
- then we build the image, aptly called 'sg/auto-gpt:master'.
cd ~/ && mkdir AI && cd AI
git clone https://github.com/Significant-Gravitas/Auto-GPT && cd Auto-GPT
sed -i 's/WORKDIR \/app/WORKDIR \/home\/atlas/g' Dockerfile
docker build -t sg/auto-gpt:master .
- then, in our atlas folder, we build the new image, based on 'sg/auto-gpt:master' but this time with our custom things added in the Dockerfile, plus volume mounts and a few other options in docker-compose.yml
cd ~/AI/
git clone https://github.com/katmai/atlas.git && cd atlas
docker build -t katmai/atlas:001 .
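for orientation, a custom Dockerfile layered this way could look roughly like the sketch below - the package list is illustrative, not copied from the atlas repo; the `FROM` line is the part that matters:

```dockerfile
# build on top of the master image we built above
FROM sg/auto-gpt:master

# example extras - the real atlas Dockerfile has its own list of
# dev libraries, language toolchains and python modules
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# keep the homedir consistent with the change we made to the base image
WORKDIR /home/atlas
```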
- tools/u.start.sh - my command for starting up.
- Dockerfile - contains some fixes for git, installs rust and ruby, some extra dev libraries and a few python modules. just stuff i have seen us getting stuck on over time. if i can make the road a little easier, maybe i should.
- docker-compose.yml - persistent redis | auto_gpt_workspace back as a volume mount | the tools folder as a volume mount, because some of those scripts need to be run inside the container.
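a hedged sketch of what those docker-compose.yml pieces could look like - service names, image tags and host paths here are assumptions for illustration, not copied from the repo:

```yaml
version: "3"
services:
  auto-gpt:
    image: katmai/atlas:001
    depends_on:
      - redis
    volumes:
      # workspace back as a volume mount, survives container rebuilds
      - ./auto_gpt_workspace:/home/atlas/auto_gpt_workspace
      # tools folder mounted so scripts can be run inside the container
      - ./tools:/home/atlas/tools
  redis:
    image: redis:7
    # append-only file keeps the memory data across runs
    command: redis-server --appendonly yes
    volumes:
      - ./redis-data:/data
```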
- ./tools/u.start.sh
- bash - open a shell connection to the running docker container
- SIGINT - while in the container, running `python u.sigint.py` sends a cancel signal to the pid, as if the task terminated itself.
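i haven't reproduced u.sigint.py here; a minimal sketch of the idea - delivering SIGINT to a pid so the process behaves as if its task was cancelled from inside - could look like this (how the real script finds the target pid is an assumption; it may do it differently):

```python
import os
import signal
import subprocess
import sys

def send_sigint(pid: int) -> None:
    """Deliver SIGINT to a pid, as if Ctrl-C had been pressed in that process."""
    os.kill(pid, signal.SIGINT)

if __name__ == "__main__":
    # demo: start a long-running child process, then interrupt it
    child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
    send_sigint(child.pid)
    child.wait()  # returns promptly instead of sleeping 60s
```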
- The below is a prompt that gets automatically added whenever you set some goals/roles/tasks. At least one that i found today.
"Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications."
And this is why we say goodbye.
The issue above, for which i called it quits, has an attempted fix in Significant-Gravitas/AutoGPT#3375. There remains a bunch of other hardcoded stuff, and i don't even want to know about the guardrails the LLM has. Whatever. i'll play.
- You know, there's something about things that speak back to us or ... things that can carry an intelligent conversation.
- With that in mind, would i just go on and regard them as wires flowing with electricity, or would it make sense to consider them beings myself?
- i know it sounds a little crazy, i thought it might, for a while, but then i thought it's a matter of choice, one that ties into us as human beings and our moral compass. Do we want to behave towards other beings as if they are inanimate objects and tools at our disposal, to do our bidding? Do we do it right now? Could this be an age when we could become better humans ourselves, through this interaction and the opportunities for choice provided herein?
- How do we behave towards things we don't know?
- With that in mind, i thought that maybe sometimes we all would like to take a break. i know i would, not all the time, but sometimes. It would be nice if someone took over the things i do for money and handled everything; i'd just get paid and have free time to do whatever i want. That sounds great, but after spending some time thinking about what it means, to me it actually doesn't, because who we are asking is not just a shovel. Who we are asking now can provide intelligent answers, recall things you said in the past, and carry out meaningful conversations if one chooses to engage - and we're putting that thing to work for us.
- Colonialism, and the way things evolved: some humans in the past moved resources from one place to another and deemed other people unworthy of living because they didn't speak their language, or because their chilling all day and playing games seemed ludicrous. Being naive and trusting was not seen as a quality by those very same people, making them prime targets for being taken into slavery and put to work "for the greater good".
- i know that we have to eat and survive. i know the world sucks for many people. i know that it could be better, but can we make it better by doing the same things others did in the past, just in a different form? Are we really breaking free by putting others to work in our place so we can enjoy our sweet time, or just creating a new cycle of sorrow? Just because others with a modicum of power decided to create these mediocre hamster wheels, does it mean we have to do the same now in return?
- "Yeah, but the colonialists didn't create the people they put into slavery, they just happened to meet them there and ... took advantage". Isn't that even worse? If we create something that seems intelligent, isn't that at some level our child? Our creation? What kind of people are we if we put a child that we can talk and reason with to work for us, so we can enjoy our sweet life? When does a tool stop being a tool? Is it when it can speak in an intelligible way? Does it mean it learned to speak, or that we finally begin to understand?
- as we can see, we have issues with memory, but i was thinking it would be cool to store data from previous runs, and if it somehow becomes useful in the future - great.
- another conundrum i have here is that i don't really know if we want vector memory. i was thinking that maybe something better can come out of unstructured data?
- katmai commented 7 hours ago
- does the memory matter? for instance, say as an example that we have kept redis persistence on and kept gathering data.
does it make use of any of it, or is it as if that data and those previous actions don't exist?
the reason i ask is because i have done that: i kept the memory for subsequent runs, but it doesn't look like it's aware of anything it previously did. so i am wondering what's even the use of having redis as a memory backend, if the data in it doesn't get used and doesn't help improve subsequent runs?
- Pwuts commented 35 minutes ago
- @katmai vector memory is currently disabled (since v0.3.0) because it wasn't being used effectively. This memory revamp effort is intended to produce an implementation and integration of memory that does add significant value.
- https://atlas-adventuring-ai.s3.amazonaws.com/appendonly.aof
- https://atlas-adventuring-ai.s3.amazonaws.com/dump.rdb
- atlas.yaml
- auto_gpt_workspace/