Project: Administrator / chatgpt.ai-pro.org
Commit 94593dd1
Authored 2023-02-22 04:38:45 +0000 by Ryan

Merge branch 'jeff_fix_maxtoken' into 'master'

jeff_fix_maxtoken

See merge request !27

2 parents: 5d77d99f and 4e4163e2

Showing 1 changed file with 34 additions and 15 deletions

index.js
...
...
@@ -36,24 +36,43 @@ app.post('/api', async (req, res) => {
     let greetingPrompt = 'Hello, how can I assist you?'
     const greetings = ['hi', 'hello', 'hey']
     if (greetings.some((greeting) => message.toLowerCase().includes(greeting))) {
         greetingPrompt = 'Hello, how can I help you today?'
     }
-    const prompt = `${greetingPrompt}\n${message}`;
-    const response = await openai.createCompletion({
-        model: `${currentModel}`, // "text-davinci-003",
-        prompt,
-        max_tokens: 2500,
-        temperature,
-    });
-    res.json({
-        message: response.data.choices[0].text,
-    })
+    let query_prompt = `${greetingPrompt}\n${message}`;
+    str_length = req.body.message.split(' ').length;
+    if (str_length >= 800){
+        arr_body = req.body.message.split("\n");
+        if (arr_body.length >= 4){
+            var i = arr_body.length - 2
+            while (i--) {
+                arr_body.splice(i, 1);
+            }
+            query_prompt = arr_body.join("\n")
+        }
+    }
+    try {
+        const response = await openai.createCompletion({
+            model: `${currentModel}`, // "text-davinci-003",
+            prompt: query_prompt,
+            max_tokens: 3000,
+            temperature,
+        });
+        res.json({
+            message: response.data.choices[0].text,
+        })
+    } catch (e) {
+        let error_msg = e.response.data.error.message ? e.response.data.error.message : '';
+        if (error_msg.indexOf('maximum context length') >= 0){
+            res.json({
+                message: "The output for your prompt is too long for us to process. Please reduce your prompt and try again.",
+            })
+        } else {
+            console.log(e.response);
+        }
+    } finally {
+        // console.log('We do cleanup here');
+    }
 });
 // Get Models Route
...
...
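As a reading aid for the hunk above: once the incoming message reaches 800 or more space-separated words and spans at least 4 lines, the new code discards everything except the last two newline-separated lines of the message (the greeting prefix is also dropped in that case, since query_prompt is reassigned from arr_body alone). A minimal standalone sketch of that heuristic; the truncatePrompt helper name and the sample input are illustrative only and not part of this change:

// Sketch only: mirrors the truncation logic added in this commit.
// Helper name and example input are illustrative, not from the repository.
function truncatePrompt(message) {
    let query_prompt = message;
    const str_length = message.split(' ').length;   // rough word count
    if (str_length >= 800) {
        const arr_body = message.split("\n");
        if (arr_body.length >= 4) {
            let i = arr_body.length - 2;
            while (i--) {
                arr_body.splice(i, 1);              // remove all but the last two lines
            }
            query_prompt = arr_body.join("\n");
        }
    }
    return query_prompt;
}

// Example: a long multi-line message collapses to its final two lines.
const longMessage =
    Array.from({ length: 900 }, (_, n) => `w${n}`).join(' ') +
    "\nline two\nline three\nline four";
console.log(truncatePrompt(longMessage));           // "line three\nline four"

Word and line counts are only a rough proxy for the model's token limit, which is presumably why the added try/catch still inspects the API error for 'maximum context length' and returns the friendly "prompt too long" message instead of letting the request fail.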