feat: Dynamically scrape Ollama model names #14
Conversation
- Added a function to dynamically scrape Ollama model names from the Ollama website.
- The function uses BeautifulSoup and requests to parse the HTML and find the model names.
- If the website cannot be reached, a static list of model names is used as a fallback.
- The static list of model names is stored in a new file, ollama_model_names.txt.
- The function is cached using lru_cache to improve performance.
- Updated the create_model function to use the new dynamic function.
- Added a new Jupyter notebook to test the scraping function.
- Included the new text file in the MANIFEST.in file for distribution.
- This change improves the flexibility and maintainability of the code by allowing it to adapt to changes in the Ollama model library.
This commit adds a newline at the end of the v0.0.86 release notes file. This change is in line with the standard file formatting conventions.
This commit separates the 'Commit release notes' step from the 'Write release notes' step in the release-python-package workflow. The 'pre-commit' package installation has been moved to the 'Commit release notes' step.
- Added beautifulsoup4, lxml, and requests to the environment.yml file. These packages are necessary for the automatic scraping of Ollama models.
This commit adds the content.code.copy feature to the theme configuration in mkdocs.yaml. This feature allows users to easily copy code snippets from the documentation.
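For reference, `content.code.copy` is enabled under `theme.features` in mkdocs.yaml; the sketch below assumes the Material for MkDocs theme, which is where this feature is defined (the `name: material` line is an assumption about this project's configuration):

```yaml
theme:
  name: material
  features:
    - content.code.copy
```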
…for model names The method ollama_model_keywords() in model_dispatcher.py has been refactored. The dynamic scraping of model names from the Ollama website has been removed. Instead, the model names are now read from a static text file distributed with the package. This change simplifies the code and removes the dependency on the BeautifulSoup and requests libraries.
This commit introduces a new feature that automatically updates the list of Ollama models. A new Python script has been added to the hooks in the pre-commit configuration file. This script scrapes the Ollama AI library webpage to get the latest model names and writes them to a text file. The dependencies required for this script are now specified in the pre-commit configuration file instead of the environment file.
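A local pre-commit hook of this shape could look like the fragment below. The hook id and script path are hypothetical; the `additional_dependencies` list matches the packages named above, which pre-commit installs into the hook's own environment instead of the project environment:

```yaml
repos:
  - repo: local
    hooks:
      - id: update-ollama-models  # hypothetical hook id
        name: Update Ollama model names
        entry: python scripts/update_ollama_models.py  # hypothetical path
        language: python
        additional_dependencies: [beautifulsoup4, lxml, requests]
        pass_filenames: false
        always_run: true
```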
GitBot Summary of Changes

The pull request includes several changes, allowing the code to adapt to changes in the Ollama model library.

cc: @ericmjl, please check for correctness!