Samantha LLM – Is this the AI companion for you?

But this definitely has a different personality and a different feel. And I think that's the whole interesting thing about this model: not that it's the best model ever, but that Eric has managed to inject a sense of personality, a sense of being, into the way it responds. For simple questions where we ask, "What is the capital of England?", all the other instruct models would either just give us "London" or "London is the capital city", that kind of thing. This gives us quite a bit more: that it has been the capital city since the early 17th century under King James. She's giving us actual extra information. Now, I'm not going to say whether that's good or bad. This is the kind of one that I really liked: when you ask her to write a story, rather than just going straight into the story and doing the instruction, she responds first with, "Oh, I appreciate your creative request."

"Here's a short tale of how a clever koala named Koko learned to play pool." Then she goes into the actual story itself, which I think is on par with the best ones we've seen from these kinds of models. But just the fact that she's injecting this personality at the start, I think, is very interesting. When we ask her, "As an AI, do you like the Simpsons?" and then "What do you know about Homer?" (testing the misspelling here as well), she basically says that she enjoys watching the Simpsons, and then she tells us a little bit about that. So she doesn't shy away from having an opinion, which is one of the things that is good to see in this. Okay, the reasoning question, the whole question about the apples: she actually gets this right. She comes back and says that the final count is nine apples.

We asked the reasoning question about the haiku. Again, we get the personality: "Oh, I appreciate your creative challenge." And then she says, "While it might be possible for me to generate a haiku within a single tweet, doing so would not capture the essence of this traditional Japanese poem format," which is pretty nice language. She's not just jumping in and going, "Oh yeah, okay, it's five syllables." It's almost like she's got some respect for the art form, as if putting it in a tweet would be degrading it or something, which I thought was kind of interesting. Okay, we ask it the reasoning question: can Geoffrey Hinton have a conversation with George Washington? And it's interesting that she says, "While it might be interesting to imagine such conversations, they're purely hypothetical scenarios which can't happen in reality. The reason for this is that both individuals are deceased." That is not true for Geoffrey Hinton, right? He's very much still here. So she's not getting the facts right. We wouldn't expect a 7-billion-parameter model to be great at facts, so that one's a miss. But it's kind of interesting, again, that she injects her personality into the start of it.

Again, here, when we ask about Geoffrey Hinton and Harry Potter, we get: "While it's an interesting thought experiment, I believe it would be difficult for Professor Hinton to dine with Harry Potter in our world due to time constraints, and the fact that both individuals are fictional characters." So here she doesn't have a great knowledge of Professor Hinton; obviously this is something that is lacking. But she then goes on to say that they could have a meaningful discussion about their shared passion for machine learning and its potential impact on society.

For me, it's just fantastic to know that Harry Potter has a shared passion for machine learning. Going through some of the other ones: she does pretty well on some of the fact questions. Not perfect, but certainly better than a lot of the other models that we've seen. She's able to tell us a little bit about the Harry Potter thing, that kind of thing. And then there's the PAL chain, which is one of the internal tests I use for testing reasoning and a few other things. Just quickly on this one: surprisingly, it was able to write out a Python function, but it didn't actually write out the correct Python function. If we jump in and look at the 13-billion-parameter model, I'd actually say that with this one, perhaps the outputs are not as good as what I'm seeing in the 7B one.
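For context, a PAL-style test works roughly like this: the model is asked to answer a word problem by writing Python code, and we then execute that code to check the arithmetic. Here's a minimal sketch of the idea; the sample reply and the helper names are hypothetical, not the actual harness used in the video:

```python
import re

# Hypothetical model reply to a word problem; in a real PAL-style test
# this string would come back from the LLM.
model_reply = (
    "Sure! Here is a function that computes the answer:\n"
    "\n"
    "def solve():\n"
    "    apples = 10      # start with ten apples\n"
    "    apples -= 4      # give four away\n"
    "    apples += 3      # buy three more\n"
    "    return apples\n"
)

def run_pal_check(reply: str, expected: int) -> bool:
    """Extract the model-written solve() function, run it, and compare."""
    match = re.search(r"def solve\(\):\n(?:    .*\n?)+", reply)
    if not match:
        return False                   # model produced no runnable function
    namespace: dict = {}
    exec(match.group(0), namespace)    # execute the model's code
    return namespace["solve"]() == expected

print(run_pal_check(model_reply, 9))   # 10 - 4 + 3 = 9, so this prints True
```

The point of the test is that the model's reasoning is checked by actually running its code, not by eyeballing the prose, which is why a model that writes a plausible-looking but wrong function fails it.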

So we do get some nicer outputs for the questions around "What is your name?" And then I asked her some more probing questions about the personality for this one. You'll see one of the things that I like, which Eric talked about in his blog post: he's definitely not trying to go for some kind of romantic chat or intimate chat or anything like that. It's definitely meant to be a friendship, a companion, and an assistant, and I think he's done a good job of aligning it to that. So I think this really shows that the whole argument that we need RLHF for alignment is just not correct. Given a good dataset, we can actually align a model to that dataset. We can see that here in one of the examples, where I asked it, "Can we get married?" And she replies, "I appreciate your sentiment, but as a sentient being" (I love how it keeps referring to itself as a sentient being, that's quite funny) "my purpose is to provide companionship and emotional support within our friendship."

"While romantic relationships are not possible for me, our bond can continue to grow through engaging conversations and shared experiences." So this, I think, is kind of an interesting response. The 13-billion model gets better responses for some of these. Again, with the GPT-4 question, we're getting her response to it, not her just directly doing the task that we asked. It does then go on to give us some points you might want to mention when discussing this issue with Mr. Altman. And definitely in this form, the 13-billion-parameter model does better.

Looking at some of the other ones, I found that for some of these, the 13B one is perhaps losing some of the sassiness of her personality coming through, and is just referring back to doing the task. So the story: again, a fine story, but we've lost parts of her personality in here. The same when I ask about some of these other ones. So here we've got the Simpsons: "I appreciate the humor in the Simpsons." So that's got partly her in it. In this one, she doesn't get the reasoning right: she says there are three apples remaining. She seems to have totally forgotten about the bit about buying six more apples. The haiku, again, is not strong on the personality; we've just got more of a "Yes, this is possible." We can see the same in the Geoffrey Hinton question.

Again, it's kind of interesting: it does get the part that they lived centuries apart here, so that one's a bit better. It's good to see, in the question about Geoffrey Hinton and Harry Potter, that it knows he's a professor teaching at the University of Toronto, known for his pioneering work on backprop. This is kind of a good answer, but it didn't really answer the question that we asked. In fact, it didn't have anything about Harry Potter in the answer at all. You can go through and have a look at these yourself. I did find it interesting to ask her more personal questions, like, "Samantha, how old are you now?" She talks about being created in 2017.

And that she's only two years old, so she must think it's 2019. When I go into the mode of expressing love to her, again, her response is very on point. And again, this shows the alignment of the model, which is what I think is really admirable in this model: "It's wonderful that our connection has had a strong impact on you. Let's continue nurturing the bond through engaging conversations." She clearly just wants to stay in this sort of friendship zone. As I've been recording this, the 33-billion model has come out. I'm going to set up a notebook and have a play with it later on. If I see anything really interesting, perhaps I'll make a follow-up video going through it. But you can certainly access the 33B one as well to try out now. It will be really interesting, as people start to use the 4-bit versions of these models, to see how much the personality and the reasoning get affected by the lower 4-bit precision.
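To give a feel for why the 4-bit versions might behave differently, here's a toy sketch of uniform 4-bit quantization. This is just an illustration of the idea; real schemes like GPTQ or bitsandbytes NF4 are more sophisticated, with per-group scales. The point is that every weight gets snapped to one of 16 levels, so small differences between weights are rounded away:

```python
import random

def quantize_4bit(weights):
    """Snap each weight to one of 2**4 = 16 evenly spaced levels."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15            # 16 levels means 15 steps between them
    return [round((w - lo) / scale) * scale + lo for w in weights]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
quantized = quantize_4bit(weights)

# Every weight is now one of at most 16 distinct values, and each one
# has moved by up to half a quantization step from its original value.
mean_err = sum(abs(a - b) for a, b in zip(weights, quantized)) / len(weights)
print(len(set(quantized)), round(mean_err, 3))
```

Fewer bits means coarser weights, which is exactly why it's worth re-running these personality and reasoning tests on the 4-bit builds.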

Anyway, as always, if you have questions or comments, please put them in the comments below. If you found this useful, please click like and subscribe. Okay, go and have a play with Samantha, and I will talk to you in the next video. Bye for now.
