sd-mesh-gear

Notice: I made a short, 60-second tutorial video, but for some reason the export isn't working in the online video editor I use. I'll update this when it's sorted. It's a cool video. No talking. Just cool stuff. In 60 seconds.

The files from that YouTube video and how I made them.

This is not the tutorial video. This is what caused the tutorial video: YouTube video

So, I couldn't just post those MP4s and not share how to make them. This stuff is super cool! This is how I share.

Follow the links below and read up a bit from the original source; that's an important part. You can run the ashawkey implementation in Colab. Colab has been making lots of changes to its available machines lately, and it seems to be going well. It'll cost a bit to run. If I recall correctly, it finished in about 25 minutes on an A100 with 40 GB of VRAM. Colab will give you a cost estimate now, which is really nice. A rough sketch of what the Colab cells look like is below.
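For reference, here's a minimal sketch of those Colab cells. The flags (--text, --workspace, -O, --test) come from ashawkey/stable-dreamfusion's README, but check that repo for current options; the prompt and workspace name here are just examples, not what I actually used.

    # Colab cells (Python with "!" shell escapes and "%" magics), a minimal sketch.
    # Flags follow ashawkey/stable-dreamfusion's README; check that repo for
    # current options before running.
    !git clone https://github.com/ashawkey/stable-dreamfusion.git
    %cd stable-dreamfusion
    !pip install -r requirements.txt

    # Train a NeRF from a text prompt. "-O" turns on the repo's recommended
    # speed/memory presets; the prompt and workspace name are examples.
    !python main.py --text "a steampunk brass gear" --workspace trial_gear -O

    # Render a turntable video of the result (writes an .mp4 in the workspace).
    !python main.py --workspace trial_gear -O --test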

The output will be an MP4 by default, but you can also export the .obj file. That's the mesh. Now go get Blender and open the mesh file. That's it. That's pretty much how I made that strange video, sans all the exploratory parts. Explore for yourself. Have fun. Share what you do with the world and subscribe to my YouTube channel. It's nice ;) A sketch of the mesh export and the Blender import follows.
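Here's a sketch of that last step, again assuming stable-dreamfusion's CLI (--save_mesh is its mesh-export flag) and Blender's bundled Python (bpy). The workspace path is a placeholder, not a real file from this repo.

    # Export the mesh instead of (or in addition to) the video:
    #   python main.py --workspace trial_gear -O --test --save_mesh
    # That writes an .obj (plus material/texture files) under the workspace.

    # Then, in Blender's Python console (Blender 3.x operator shown; on
    # Blender 4.x use bpy.ops.wm.obj_import instead). Path is a placeholder.
    import bpy

    bpy.ops.import_scene.obj(filepath="/path/to/trial_gear/mesh/mesh.obj")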

Amazing Sciencey Bits (Start HERE)

Now that you know what the button is going to do, it's time to click it. Or tap it. Whatever. Do that below.

Implementations/Colabs/Etc.

A huge thanks to ashawkey! Go thank that person.

Make stuff awesome.

Acknowledgement

(I copied this section from ashawkey/stable-dreamfusion. It seemed prudent to do so.)

  • The amazing original work: DreamFusion: Text-to-3D using 2D Diffusion.

    @article{poole2022dreamfusion,
        author = {Poole, Ben and Jain, Ajay and Barron, Jonathan T. and Mildenhall, Ben},
        title = {DreamFusion: Text-to-3D using 2D Diffusion},
        journal = {arXiv},
        year = {2022},
    }
    
  • Huge thanks to Stable Diffusion and the diffusers library.

    @misc{rombach2021highresolution,
        title={High-Resolution Image Synthesis with Latent Diffusion Models}, 
        author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
        year={2021},
        eprint={2112.10752},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }
    
    @misc{von-platen-etal-2022-diffusers,
        author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
        title = {Diffusers: State-of-the-art diffusion models},
        year = {2022},
        publisher = {GitHub},
        journal = {GitHub repository},
        howpublished = {\url{https://github.com/huggingface/diffusers}}
    }
    

I have a dream that one day the perfect solution to trackpoint drift will be revealed, and NO, that does NOT include taking my finger off the nub. It's too damn comfortable a position. I think the trackpoint hasn't lived up to its potential yet. Someone should do something. Perhaps that person is you. Just saying. Imagine all the tiny finger movements you could save all over the world! Someone who isn't me should consider doing the world a solid. Anyhow... ML project or Kaggle contest, anyone? Someone fix it. You can do it!
