Merged

Changes from 1 commit (29 commits in total)
6a57cea
Reverting models to make sure calls to the simulator work
nagkumar91 Mar 7, 2024
dd01ecf
merge
nagkumar91 Mar 7, 2024
8cea9c3
quotes
nagkumar91 Mar 7, 2024
bea237e
Spellcheck fixes
nagkumar91 Mar 7, 2024
45073cc
ignore the models for doc generation
nagkumar91 Mar 7, 2024
08af5b3
Fixed the quotes on f strings
nagkumar91 Mar 7, 2024
7584cc9
pylint skip file
nagkumar91 Mar 7, 2024
e10fe6f
Merge branch 'Azure:main' into main
nagkumar91 Mar 7, 2024
304d506
Merge branch 'Azure:main' into main
nagkumar91 Mar 11, 2024
d727177
Support for summarization
nagkumar91 Mar 11, 2024
8b895ee
Adding a limit of 2 conversation turns for all but conversation simul…
nagkumar91 Mar 11, 2024
92d6d8e
exclude synthetic from mypy
nagkumar91 Mar 11, 2024
4742b04
Another lint fix
nagkumar91 Mar 11, 2024
975b0b3
Skip the file causing linting issues
nagkumar91 Mar 12, 2024
a00871f
Merge branch 'Azure:main' into main
nagkumar91 Mar 12, 2024
6bf1de0
Bugfix on output to json_qa_lines and empty response from callbacks
nagkumar91 Mar 13, 2024
5a974ce
Merge branch 'main' into main
nagkumar91 Mar 13, 2024
3f9c000
Skip pylint
nagkumar91 Mar 13, 2024
5ab6ab2
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Mar 13, 2024
0c76fb0
Add if/else on message to eval json util
nagkumar91 Mar 14, 2024
fad8599
Merge branch 'Azure:main' into main
nagkumar91 Mar 21, 2024
a6d8d0f
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Mar 25, 2024
a1e9c9d
adding max_simulation_results for sync call
nagkumar91 Mar 25, 2024
10ba426
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Mar 25, 2024
3634d7c
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Apr 1, 2024
1cbc55c
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 May 23, 2024
ac39d75
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Jun 12, 2024
85e91bf
Merge branch 'main' of https://github.com/nagkumar91/azure-sdk-for-py…
nagkumar91 Jun 13, 2024
503634d
Bugfix: None was being added to the end of the output path
nagkumar91 Jun 14, 2024
Spellcheck fixes
nagkumar91 committed Mar 7, 2024
commit bea237e466286888f05d0ec685eaa2082fd88332
@@ -734,7 +734,7 @@ def _parse_response(self, response_data: dict, request_data: dict) -> dict: # t
         prompt = request_data["input_data"]["input_string"][0]

         # remove prompt text from each response as llama model returns prompt + completion instead of only completion
-        # remove any text after the stop tokens, since llama doesn"t support stop token
+        # remove any text after the stop tokens, since llama does not support stop token
         for idx, response in enumerate(response_data["samples"]):
             response_data["samples"][idx] = response_data["samples"][idx].replace(prompt, "").strip()
         for stop_token in self.stop:
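
To illustrate the post-processing these comments describe, here is a minimal standalone sketch. The helper name and sample data are hypothetical; the actual logic lives inside _parse_response above.

from typing import List

def clean_llama_samples(samples: List[str], prompt: str, stop: List[str]) -> List[str]:
    """Strip the echoed prompt and truncate each sample at the first stop token."""
    cleaned = []
    for sample in samples:
        # llama returns prompt + completion, so remove the prompt text
        text = sample.replace(prompt, "").strip()
        # llama does not support stop tokens natively, so truncate manually
        for stop_token in stop:
            text = text.split(stop_token)[0]
        cleaned.append(text)
    return cleaned

print(clean_llama_samples(
    ["What is 2+2? The answer is 4.<|end|>stray text"],
    prompt="What is 2+2?",
    stop=["<|end|>"],
))
# ['The answer is 4.']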
@@ -761,7 +761,7 @@ class LLAMAChatCompletionsModel(LLAMACompletionsModel):
     """
     LLaMa ChatCompletionsModel is a wrapper around LLaMaCompletionsModel that
     formats the prompt for chat completion.
-    This chat completion model should be only used as assistant, and shouldn"t be used to simulate user. It is not possible
+    This chat completion model should be only used as assistant, and should not be used to simulate user. It is not possible
     to pass a system prompt do describe how the model would behave, So we only use the model as assistant to reply for questions
     made by GPT simulated users.
     """
@@ -780,8 +780,9 @@ def format_request_data(self, messages: List[dict], **request_params): # type:
             captions=self.image_captions,
         )

-        # For LLaMa we don"t pass the prompt (user persona) as a system message since LLama doesn"t support system message
-        # LLama only supports user, and assistant messages. The messages sequence has to start with User message/ It can"t have two user or
+        # For LLaMa we do not pass the prompt (user persona) as a system message since LLama does not support system message
+        # LLama only supports user, and assistant messages.
+        # The messages sequence has to start with User message/ It can not have two user or
         # two assistant consecutive messages.
         # so if we set the system meta prompt as a user message, and if we have the first two messages made by user then we
         # combine the two messages in one message.
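
The turn-merging rule spelled out in these comments can be sketched as a small normalization pass. This is a hedged illustration only, with a hypothetical to_llama_messages helper, not the SDK's actual format_request_data implementation.

from typing import Dict, List

def to_llama_messages(persona: str, messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # llama has no system role, so the persona goes in as the first user turn
    merged: List[Dict[str, str]] = [{"role": "user", "content": persona}]
    for message in messages:
        if merged[-1]["role"] == message["role"]:
            # two consecutive messages with the same role are combined into one
            merged[-1]["content"] += "\n" + message["content"]
        else:
            merged.append(dict(message))
    return merged

history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
]
print(to_llama_messages("You are talking to a travel agent.", history))
# The persona and the first user message are merged into a single user turn,
# so the sequence starts with user and strictly alternates user/assistant.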