03-using-generative-ai-responsibly/README.md
+2-2
@@ -1,6 +1,6 @@
# Using Generative AI Responsibly
-[](https://aka.ms/gen-ai-lesson-3-gh?WT.mc_id=academic-105485-koreyst)
+[](https://aka.ms/gen-ai-lessons3-gm?WT.mc_id=academic-105485-koreyst)
> **Video Coming Soon**
@@ -44,7 +44,7 @@ Let's take for example we build a feature for our startup that allows students t
The model produces a response like the one below:
-_11zon.webp?WT.mc_id=academic-105485-koreyst)
+
How you write your prompt to the LLM matters; a carefully crafted prompt can achieve a better result than one that isn't. But what are these concepts, prompt and prompt engineering, and how do I improve what I send to the LLM? Questions like these are what this chapter and the upcoming chapter look to answer.
@@ -77,7 +77,7 @@ An LLM sees prompts as a _sequence of tokens_ where different models (or version
To get an intuition for how tokenization works, try tools like the [OpenAI Tokenizer](https://platform.openai.com/tokenizer?WT.mc_id=academic-105485-koreyst) shown below. Copy in your prompt - and see how that gets converted into tokens, paying attention to how whitespace characters and punctuation marks are handled. Note that this example shows an older LLM (GPT-3) - so trying this with a newer model may produce a different result.
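To make the idea concrete, here is a deliberately naive tokenizer sketch. This is illustrative only: real LLMs use subword schemes such as byte-pair encoding, so actual token boundaries will differ from this simple split.

```python
import re

def naive_tokenize(text: str) -> list[str]:
    # Split into runs of word characters, runs of whitespace, and single
    # punctuation marks -- a rough stand-in for how a real tokenizer treats
    # whitespace and punctuation as distinct tokens.
    return re.findall(r"\w+|\s+|[^\w\s]", text)

print(naive_tokenize("Hello, world!"))
# → ['Hello', ',', ' ', 'world', '!']
```

Pasting the same text into the OpenAI Tokenizer will produce different (and fewer or more) tokens, which is exactly the point: token counts depend on the model's vocabulary, not on words.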
@@ -87,7 +87,7 @@ Want to see how prompt-based completion works? Enter the above prompt into the A
But what if the user wanted to see something specific that met some criteria or task objective? This is where _instruction-tuned_ LLMs come into the picture.
As expected, each model (or model version) produces slightly different responses thanks to stochastic behavior and variations in model capability. For instance, one model targets an 8th-grade audience while another assumes a high-school student. But all three models generated responses that could convince an uninformed user that the event was real.
> *(Click the image above to view a video of this lesson)*
@@ -60,7 +60,7 @@ When building a chat application, a great first step is to assess what is alread
- **Easier maintenance**: Updates and improvements are easier to manage, as most APIs and SDKs simply require a library update when a newer version is released.
- **Access to cutting-edge technology**: Leveraging models that have been fine-tuned and trained on extensive datasets provides your application with natural language capabilities.
-Accessing functionality of an SDK or API typically involves obtaining permission to use the provided services, which is often through the use of a unique key or authentication token. We'll use the OpenAI Python Library to explore what this looks like. You can also try it out on your own in the [notebook](notebook.ipynb) for this lesson.
+Accessing functionality of an SDK or API typically involves obtaining permission to use the provided services, which is often through the use of a unique key or authentication token. We'll use the OpenAI Python Library to explore what this looks like. You can also try it out on your own in the [notebook](./notebook.ipynb?WT.mc_id=academic-105485-koreyst) for this lesson.
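Most SDKs expect the key to be supplied at runtime rather than hard-coded. A minimal sketch of that pattern follows; the environment-variable name matches the common OpenAI convention, but the helper itself is illustrative, not part of the library.

```python
import os

def get_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment so it never lives in source control,
    # and fail fast with a clear message instead of a confusing HTTP 401 later.
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable to your API key.")
    return key
```

Keeping this check at startup surfaces a missing credential immediately, before any request is made to the service.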
```python
import os
```
@@ -87,11 +87,11 @@ General UX principles apply to chat applications, but here are some additional c
One such example of personalization is the "Custom instructions" settings in OpenAI's ChatGPT. It allows you to provide information about yourself that may be important context for your prompts. Here's an example of a custom instruction.
-
+
This "profile" prompts ChatGPT to create a lesson plan on linked lists. Notice that ChatGPT takes into account that the user may want a more in-depth lesson plan based on her experience.
-
+
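When building your own chat application, a similar personalization effect can be approximated by placing profile text in a system message. A hedged sketch: the roles follow the common system/user chat-message convention, and the profile text is purely hypothetical.

```python
# Hypothetical profile text standing in for a user's "Custom instructions".
custom_instructions = (
    "I'm an experienced developer. Assume I know the basics and "
    "prefer concise, in-depth explanations."
)

# Chat APIs typically accept a list of role-tagged messages; putting the
# profile in the system message lets it shape every subsequent response.
messages = [
    {"role": "system", "content": custom_instructions},
    {"role": "user", "content": "Create a lesson plan on linked lists."},
]
```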
### Microsoft's System Message Framework for Large Language Models
08-building-search-applications/README.md
+3-3
@@ -1,6 +1,6 @@
# Building Search Applications
-[](TBD)
+[](TBD)
> **Video Coming Soon**
@@ -35,7 +35,7 @@ The lesson includes an Embedding Index of the YouTube transcripts for the Micros
The following is an example of a semantic query for the question 'can you use rstudio with azure ml?'. Check out the YouTube URL; you'll see it contains a timestamp that takes you to the place in the video where the answer to the question is located.
-
+
## What is semantic search?
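At its core, semantic search embeds both the query and the documents as vectors, then ranks documents by vector similarity. A minimal pure-Python sketch of that ranking step follows; the tiny two-dimensional vectors and document ids are illustrative stand-ins for real embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors: closer to 1.0 means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def semantic_search(query_vec: list[float], index: dict[str, list[float]]) -> list[str]:
    # `index` maps document ids to precomputed embedding vectors; return the
    # document ids ranked by similarity to the query embedding, best first.
    return sorted(index, key=lambda doc: cosine_similarity(query_vec, index[doc]), reverse=True)

index = {"rstudio-on-azureml": [0.9, 0.1], "intro-video": [0.1, 0.9]}
print(semantic_search([0.8, 0.2], index))
# → ['rstudio-on-azureml', 'intro-video']
```

A production Embedding Index works the same way, only with high-dimensional vectors produced by an embedding model rather than hand-written pairs.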
@@ -154,7 +154,7 @@ Open the [solution notebook](./solution.ipynb?WT.mc_id=academic-105485-koreyst)
When you run the notebook, you'll be prompted to enter a query. The input box will look like this:
-
+