A very common use case for large language models nowadays is going through a large number of complex documents to evaluate whether they would be a good fit for a company. For example, we could create a pipeline that users can run to check if a resume would be a good fit for a job spec. This could be run manually or scheduled to process resumes in batches.
So for this demo, I've prepared a job spec and two resumes. One resume is a good match for the job spec, while the other is actually not a good fit. If we have a look at the job spec, this role is for a front-end developer with React and TypeScript experience.
If we have a look at Jane's resume, we can see that she's got over eight years of experience as a front-end developer, and under her skills, she does indeed list experience with React and TypeScript. Whereas if we look at John's resume, John doesn't seem to have any React experience, and he's got experience with C++, C, and assembly. So John would not be a good fit for this position.
So within this resume evaluator, we can upload any resume. Let's actually go ahead and select John's resume, submit this for evaluation, and if we look at the output, we can see all the criteria that we asked the LLM to evaluate this resume against, like the technical skills, the years of experience, education, industry-specific knowledge, etc. And for each of these criteria, we also get a score.
And we can see that John would not actually be a good fit based on the technical skills match. Within this report, we also see an overall recommendation from the LLM, which we can use to decide if this candidate would be a good fit or not. Let's try uploading Jane's resume.
Let's run this report again, and looking at the results, we can see that Jane would indeed be a very good fit: her technical skills match, her years of experience and educational background look good, and her industry-specific knowledge is strong as well. Based on the overall recommendation, Jane seems to be a very good match for our company. You will be able to use this exact same pipeline to evaluate other things, like offers, RFPs, bids, and much, much more.
So let's begin. To create this pipeline, we will be using VectorShift. Go over to vectorshift.ai and log into your account, or click on Get Started to create a new account. We will be using the free tier, so this will not cost you anything. From the dashboard, click on Pipelines, then click on New to create a new pipeline.
Then let's click on Create Pipeline from scratch. Let's start by giving our pipeline a different name, like ResumeEvaluator. If you're new to Vectorshift, then you might want to check out some of my fundamental videos, which I will link in the description of this video.
Don't worry if this is your first time, though. This is actually a super simple platform to use. So let's get started by adding an input node to the canvas.
Now for this application, we are not expecting the user to pass text into the prompt. Instead, we want the user to upload a file. So let's change the input type from text to file.
Now we want to pass our file over to the LLM. So let's go to LLMs, add the OpenAI node to the canvas, and change the model to GPT-4o. I'll actually go with this gpt-4o-2024-08-06 snapshot.
One thing I like about Vectorshift is you don't have to worry about using OpenAI API keys. Vectorshift allows you to use their credits. Of course, if you wanted to, you could check this box to use your own personal API key and then paste in your OpenAI API key.
But I'll just go with whatever credits VectorShift provides. Now let's have a look at the system prompt for this LLM. We will use the system prompt to tell the model exactly which criteria need to be evaluated for the resume and what type of results we want to get back.
So in the description of this video, you will find the exact prompt that I used, but in a nutshell, I'm simply telling the model that its job is to evaluate resumes to see if they would be a good fit based on a job spec. So I'm just telling it to carefully review the resume and the job spec provided. As a reminder, the user will be passing in the resume using this input node.
We will have a look at attaching the job spec in a minute. Then we'll compare the candidate's qualifications against the following criteria. And then I'm just listing a bunch of criteria.
Then I'm also instructing the model to provide a score from 1 to 10 for each of the criteria points. But again, you can simply copy this prompt from the description and modify it for your specific use case. So in the prompt, we need to attach both the resume and the job spec.
We can do that by creating variables. Let's start with the resume. First, I'll simply enter some text like resume content, and below that, I'm going to create a new variable.
And let's call this variable resume. You will now notice that we have a resume input on this node. So we can actually grab the input node and attach it to the resume input on the LLM.
Let's do the same for the job spec. I'll simply write job spec, and below that, let's create another variable. Let's call this job spec like so.
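Under the hood, this kind of `{{variable}}` templating is straightforward. Here's a minimal Python sketch of the idea; the prompt wording, the criteria list, and the sample values are my own illustrative assumptions, not VectorShift's internals or the exact prompt from the video:

```python
import re

# Illustrative system prompt; the wording and criteria are assumptions,
# not the exact prompt used in the video.
SYSTEM_PROMPT = """You are a resume evaluator. Compare the candidate's
qualifications against the job spec on these criteria: technical skills,
years of experience, education, and industry-specific knowledge.
Score each criterion from 1 to 10 and give an overall recommendation.

Resume content:
{{resume}}

Job spec:
{{job_spec}}
"""

def render(template: str, **variables: str) -> str:
    """Replace {{name}} placeholders with the supplied values,
    leaving unknown placeholders untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

prompt = render(
    SYSTEM_PROMPT,
    resume="Jane Doe: 8+ years front-end, React, TypeScript...",
    job_spec="Front-end developer with React and TypeScript experience.",
)
```

At run time, the input node and the file node simply supply the values for these two placeholders.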
Now we just have to pass the job spec into this LLM. You basically have two options for this. You could attach a knowledge base, but I do think a knowledge base would actually be overkill for this example, as knowledge bases are designed for uploading several documents, which we don't need in this case.
Since I'm uploading a single file, I'm actually just going to go to data loaders. Let's add the file node. Let's upload the job spec like so, and let's attach the file node to the LLM.
Great. Our LLM now has access to the resume that's uploaded by the user, as well as the job spec. Now all we have to do is return the result from the LLM back to the user.
Let's go to general. Let's add the output node. Let's pass the response of the LLM to the output node.
We will create a form that you can share publicly with your team members in a minute, but first let's test this pipeline by clicking on run pipeline. Then let's click on upload file. Let's select Jane's resume, click on run, and we do get a response back.
So before we move on, I do want to show you one possible improvement that you can make to your application. If you feel that the results are not accurate for your specific use case, then I highly recommend going to the file node, clicking on Settings, and increasing the chunk size. The default value of 400 might be too small for most projects, so I do recommend bumping this value up to about 1000 or 2000 characters, with a chunk overlap of about 200 characters.
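To make the chunk size and overlap settings concrete, here's a minimal character-based sketch of what such a splitter does. This is an illustration under my own assumptions; VectorShift's actual chunker may split on tokens or sentence boundaries rather than raw characters:

```python
# Sketch of fixed-size chunking with overlap: each chunk shares its last
# `overlap` characters with the start of the next chunk, so no sentence is
# silently cut at a hard boundary.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# A 2500-character document yields three overlapping chunks.
chunks = chunk_text("x" * 2500)
```

A larger chunk size means each chunk carries more surrounding context into the LLM, which is why bumping it up tends to help with longer documents like resumes.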
Now let's go ahead and deploy this pipeline so that we can share it with our colleagues. Let's click on deploy changes. Then let's click on export pipeline.
We want to share this as a form, so let's click on form. Let's give our form a name like resume evaluator. Let's click on create form.
On the right-hand side, we can see a preview of what the form will look like, and VectorShift allows us to customize all of this. We can change the organization logo, and we can change the name; in fact, the preview updates to Resume Evaluator.
For the description, let's enter something like, "Check if the candidate would be a good fit for the role." We can also change the input label, so let's change this to Resume, and let's change the output label to Evaluation Results, and that should be it.
Next, click on Deploy Changes, then click on Export, and now we can simply copy this URL. If you open this form in the browser, it should look something like this. Let's try uploading John's resume and click on Submit, and now we have a super efficient way of evaluating resumes. If you enjoyed this video, then please hit the like button, subscribe to my channel, and check out my other VectorShift videos over here.
I'll see you in the next one. Bye bye.