This software enables you to respond to job-related questions using your personal and professional information. It can be seamlessly integrated with external applications, such as bots, to automate responses in job application forms.
The workflow is as follows:
- First, a specialised classifier LLM tags the input question to determine which section(s) of the CV to focus on.
- Once the tags are defined, the information from those sections is extracted from the resume and passed to a second LLM, an expert QA model for job-related questions.
- With this context added to the prompt, the QA model infers the answer to the input question.
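The routing flow above can be sketched in plain Python. The section tags match the profile headings, but the stub classifier and all function names here are illustrative; in the real app both the router and the QA step are LLM calls.

```python
# Minimal sketch of the two-step flow: route the question to resume
# section(s), then build the context passed to the QA model's prompt.
# The stub router below is a stand-in for the classifier LLM.
RESUME_SECTIONS = {
    "TECHNICAL SKILLS": "Python: 5 years. SQL: 3 years.",
    "JOB EXPERIENCES": "AI Developer at ExampleCorp (2021-2024).",
}

def route_question(question: str) -> list[str]:
    """Stand-in for the router LLM: map a question to resume section tags."""
    tags = []
    q = question.lower()
    if "experience with" in q or "skill" in q:
        tags.append("TECHNICAL SKILLS")
    if "work" in q or "role" in q:
        tags.append("JOB EXPERIENCES")
    return tags or ["ABOUT MY PROFILE"]

def build_context(tags: list[str]) -> str:
    """Extract only the tagged sections and join them into the QA prompt context."""
    return "\n\n".join(RESUME_SECTIONS.get(t, "") for t in tags)

tags = route_question("How many years of experience with Python do you have?")
context = build_context(tags)
```

The QA model then receives only `context` plus the question, which keeps the prompt short and focused on the relevant parts of the resume.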
If you want to run your own tests, I provided a test directory where you can simulate template questions in LangSmith and see how accurate the app is for your use case. In my case, it achieves over 90% accuracy :)
Create Your Profile: Go to the "resume" folder in the repository and create an info.md file. Include the following headings summarizing your professional profile. The more detail you provide, the better the model's responses will be. (Avoid personal details such as your ID number or residential address unless you are 100% sure about the data management of the server where the LLM is running.)
Use the following sections exclusively:
- ABOUT MY PROFILE: A brief overview of who you are, what you do, where you live, and other general information about you.
- TECHNICAL SKILLS: Your skills or tools, with your years of experience in each.
- JOB PREFERENCES: Any preferences such as salary, remote/on-site, part-time/full-time, and so on.
- JOB EXPERIENCES: A detailed account of your work history and what you did in each role.
- PROJECTS: Professional and academic projects (from work or personal ones).
- EDUCATION: Your academic background and education.
- CERTIFICATIONS: Details of the courses you have taken and any certifications you hold.
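A minimal info.md skeleton with these sections might look like this (the content under each heading is a made-up example, not from the project):

```markdown
# ABOUT MY PROFILE
AI developer based in Madrid, focused on LLM applications.

# TECHNICAL SKILLS
Python: 5 years. SQL: 3 years. LangChain: 2 years.

# JOB PREFERENCES
Remote, full-time. Expected salary: 45k EUR.

# JOB EXPERIENCES
AI Developer at ExampleCorp (2021-2024): built RAG pipelines.

# PROJECTS
Personal: job-application QA bot (this repo).

# EDUCATION
BSc in Computer Science.

# CERTIFICATIONS
DeepLearning.AI LangChain short course.
```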
I will also share some videos demonstrating how it works: in this demo, I use gpt-4o-mini as the router and Llama 3 70B as the QA bot.
Suppose I am applying for an AI Developer role. When we invoke the chain, we set the role and the input question, in this case whether I have experience with Python. (This is hardcoded for explanation purposes; you can parameterize it later.) Note that the output is also parsed as an integer, because the model detects that the question expects a number, not a string.
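The integer parsing step can be sketched like this. The function and variable names are hypothetical, not the repo's actual API, and the raw answer is hardcoded where the real app would call the QA model:

```python
# Sketch of the output-parsing step: coerce the QA model's raw text
# to an int when the answer is numeric, otherwise keep it as a string.
def parse_output(raw_answer: str):
    """Return an int for purely numeric answers, else the trimmed string."""
    text = raw_answer.strip()
    return int(text) if text.isdigit() else text

# Hardcoded inputs, as in the demo; parameterize these in real use.
role = "AI Developer"
question = "How many years of experience do you have with Python?"
raw_answer = "5"  # pretend this came from the QA model
result = parse_output(raw_answer)
```

This way a form field that expects a number receives `5` as an `int` rather than the string `"5"`.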
demo.mp4
I also tried other topics, and this is the result: