Mirror of https://github.com/fauxpilot/fauxpilot.git (synced 2025-03-12 04:36:10 -07:00)
The main change is that the model config format has changed. To deal with this, we have a new script in the converter that will upgrade a model to the new version. Aside from that, we also no longer need to maintain our own fork of Triton, since they have fixed the bug with GPT-J models. This should make it a lot easier to stay synced with upstream (although we still have to build our own container, since there doesn't seem to be a prebuilt Triton+FT container hosted by NVIDIA).

Newer Triton should let us use some nice features:

- Support for more models, like GPT-NeoX
- Streaming token support (this still needs to be implemented in the proxy; see the sketch below)
- Dynamic batching

Still TODO:

- Proxy support for streaming tokens
- Add logic to setup.sh and launch.sh to detect whether a model upgrade is needed and perform it automatically
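For the streaming-token TODO, here is a minimal sketch of what a streaming endpoint in the proxy could look like using sse-starlette (pinned in the requirements below). The route path, request shape, and the `generate_tokens` helper are illustrative assumptions, not the proxy's actual API; a real implementation would pull tokens from Triton's streaming inference interface instead of the placeholder generator.

```python
# Minimal sketch of streaming token support in the proxy.
# The route path, request shape, and generate_tokens() are assumptions
# for illustration; the real proxy would stream tokens from Triton.
import json

from fastapi import FastAPI
from pydantic import BaseModel
from sse_starlette.sse import EventSourceResponse

app = FastAPI()


class CompletionRequest(BaseModel):
    prompt: str
    max_tokens: int = 16


async def generate_tokens(prompt: str, max_tokens: int):
    # Placeholder: in the real proxy this would call Triton's streaming
    # (decoupled) inference API and yield decoded tokens as they arrive.
    for token in ["def", " hello", "():", "\n", "    pass"][:max_tokens]:
        yield token


@app.post("/v1/engines/codegen/completions")
async def completions(req: CompletionRequest):
    async def event_stream():
        async for token in generate_tokens(req.prompt, req.max_tokens):
            # One SSE event per token so the client can render incrementally.
            yield {"data": json.dumps({"text": token})}
        yield {"data": "[DONE]"}

    return EventSourceResponse(event_stream())
```

Each token goes out as its own SSE event, so a client can render the completion incrementally instead of waiting for the full response.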
fastapi==0.82.0
numpy==1.23.2
sse-starlette==1.1.6
tokenizers==0.12.1
tritonclient[all]==2.29.0
uvicorn==0.18.3
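As a quick way to confirm that the pinned tritonclient can reach a running Triton server, something like the following sketch can be used; the URL and the model name ("fastertransformer") are assumptions about the local deployment, not anything fixed by this requirements file.

```python
# Connectivity check against a running Triton server using the pinned
# tritonclient. The URL and model name below are assumptions; adjust
# them to match your deployment.
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("fastertransformer"))
```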