This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Commit: Update LICENSE

Improved language to be more explicit about executing a MSLSA.
jeanniefinks authored Jul 16, 2024
1 parent 6ee01ec commit c12a3af
Showing 1 changed file (LICENSE) with 10 additions and 64 deletions.
@@ -1,79 +1,25 @@
Updated: July 15, 2024
Updated: July 16, 2024

NEURAL MAGIC ENTERPRISE LICENSE AGREEMENT

nm-vllm is an enterprise inferencing system for deployments of open-source large language
models (LLMs) on hardware accelerators.

The collective works, and packages based on it, are licensed under the Neural Magic Enterprise
License Agreement, as described below.
The collective works, and packages based on it, are licensed under an executed Neural Magic
Enterprise License Agreement (“Agreement”), as described in Neural Magic's complete Agreement
at: https://neuralmagic.com/legal/master-software-license-and-service-agreement/.

Please read this Neural Magic Enterprise License Agreement (“Agreement”)
carefully before using the Software for the first time. By using the Software, you agree
to these terms. If you do not agree to the terms of this Agreement, the Software may
not be installed on your computer systems.
The Neural Magic Enterprise License Agreement must be executed prior to using the Software for the
first time. By using the Software, you agree to these terms. If you fail to execute this
Agreement, the Software may not be installed, accessed, or distributed. Your rights under the
Agreement will terminate automatically without notice if you fail to comply with any term(s) of
the Agreement.

Source code and object files (“Software”) consist of a current version of: (a) machine
executable (object) code, (b) Neural Magic-branded components, and (c) any existing user guide
and release notes. The Software is copyrighted by Neuralmagic, Inc. (“Neural Magic”), a Delaware
corporation with offices at 55 Davis Square STE 3, Somerville, MA 02144 United States, and is
licensed and not sold. Licensing questions about this Software should be directed to
https://www.neuralmagic.com/contact.

1. License. Subject to the terms of this Agreement, Neural Magic grants you an
enterprise license to use the Software solely for production, commercial
applications. You will ensure that anyone who obtains and uses the Software does so only
in compliance with the terms of this Agreement. All other rights in and to the Software are
hereby reserved. We may terminate your right to use the Software at any time upon notice to you.

2. Restrictions. As the source code of the Software is confidential, you may not
decompile, reverse engineer, disassemble, or otherwise attempt to derive the source
code of the Software for any purpose. Except as permitted by applicable law and this
Agreement, you may not sublicense the Software. You may not use or otherwise export
the Software except as authorized by United States law. You may also not modify,
translate, or create derivative works of the Software, or sell, lease, license,
sublicense, market or distribute the Software or use the Software for any timesharing,
service bureau, subscription, rental or similar uses without the express prior written
consent of Neural Magic in each instance or use the Software on behalf of any third party.

3. No Warranty. The Software is provided to you "AS IS" and without warranty. You are not
entitled to any hard copy documentation, maintenance, support or updates for the Software
unless you have entered into a separate commercial agreement under which support
obligations are specified.

WE EXPRESSLY DISCLAIM ALL WARRANTIES RELATED TO THE SOFTWARE, EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE. WE DO NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE SOFTWARE WILL
MEET YOUR REQUIREMENTS, OR THAT THE OPERATION OF THE SOFTWARE WILL BE UNINTERRUPTED OR
ERROR-FREE, OR THAT DEFECTS IN THE SOFTWARE WILL BE CORRECTED. WE DO NOT WARRANT OR MAKE
ANY REPRESENTATIONS REGARDING THE USE OR THE RESULTS OF THE USE OF THE SOFTWARE OR RELATED
DOCUMENTATION IN TERMS OF THEIR CORRECTNESS, ACCURACY, RELIABILITY OR OTHERWISE. SOME
JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO PORTIONS OF THE ABOVE
EXCLUSION MAY NOT APPLY TO YOU.

4. Limitation of Liability. In no event shall we be liable to you for any damages with
respect to your use of the Software. Further:

UNDER NO CIRCUMSTANCES, INCLUDING NEGLIGENCE, SHALL WE BE LIABLE FOR ANY INCIDENTAL,
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATING TO THIS LICENSE,
INCLUDING, BUT NOT LIMITED TO, DAMAGES RESULTING FROM ANY LOSS OF DATA CAUSED BY THE
SOFTWARE. SOME JURISDICTIONS DO NOT ALLOW THE LIMITATION OF INCIDENTAL OR CONSEQUENTIAL
DAMAGES SO THIS LIMITATION MAY NOT APPLY TO YOU.

5. Controlling Law and Severability. This Agreement shall be governed by the laws of the
United States and those of the Commonwealth of Massachusetts. If for any reason a court of
competent jurisdiction finds any provision, or portion thereof, to be unenforceable, the
remainder of this Agreement shall continue in full force and effect. Any dispute regarding
this Agreement shall be resolved in either the state or federal courts in Massachusetts.

6. Complete Agreement. This Agreement constitutes the entire agreement between us with
respect to the version of the Software license hereunder. No amendment to or modification
of this Agreement will be binding unless in writing.

Your rights under this Agreement will terminate automatically without notice if you fail to
comply with any term(s) of this Agreement.

Review Neural Magic's complete Agreement at:
https://neuralmagic.com/legal/master-software-license-and-service-agreement/
https://www.neuralmagic.com/contact.

Neuralmagic, Inc.’s Legal Policies https://www.neuralmagic.com/legal

2 comments on commit c12a3af

@github-actions
smaller_is_better

Benchmark suite — Current: c12a3af vs Previous: 9daca33
(VLLM Serving - Dense, dataset: sharegpt, nr-qps-pair: 300,1, sparsity: None,
GPU: NVIDIA H100 80GB HBM3 x 1, vllm 0.5.1, Python 3.10.12, torch 2.3.0+cu121)

Metric         Model (max-model-len)                        Current     Previous    Ratio
mean_ttft_ms   facebook/opt-350m (2048)                     42.12 ms    41.79 ms    1.01
mean_tpot_ms   facebook/opt-350m (2048)                     7.72 ms     7.65 ms     1.01
mean_ttft_ms   meta-llama/Meta-Llama-3-8B-Instruct (4096)   31.98 ms    31.12 ms    1.03
mean_tpot_ms   meta-llama/Meta-Llama-3-8B-Instruct (4096)   11.66 ms    11.28 ms    1.03

This comment was automatically generated by workflow using github-action-benchmark.
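The Ratio column in these comments appears to be current divided by previous, rounded to two decimals, with values above 1.00 indicating a regression under the smaller_is_better convention. A minimal sketch of that assumed arithmetic (`ratio` is a hypothetical helper, not part of github-action-benchmark):

```python
def ratio(current_ms: float, previous_ms: float) -> float:
    """Current/previous latency ratio, rounded to two decimals.

    Assumed convention: under smaller_is_better, a ratio > 1.00
    means the current commit is slower than the previous one.
    """
    return round(current_ms / previous_ms, 2)

# Example using the mean_ttft_ms row for facebook/opt-350m on H100:
print(ratio(42.122373611976705, 41.78515797636161))  # -> 1.01
```

Differences this small (1–3%) are within typical run-to-run noise for serving benchmarks, which is presumably why neither comment flags an alert.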

@github-actions

smaller_is_better

Benchmark suite — Current: c12a3af vs Previous: 9daca33
(VLLM Serving - Dense, dataset: sharegpt, nr-qps-pair: 300,1, sparsity: None,
GPU: NVIDIA L4 x 1, vllm 0.5.1, Python 3.10.12, torch 2.3.0+cu121)

Metric         Model (max-model-len)                        Current     Previous    Ratio
mean_ttft_ms   facebook/opt-350m (2048)                     24.22 ms    24.11 ms    1.00
mean_tpot_ms   facebook/opt-350m (2048)                     6.11 ms     6.17 ms     0.99
mean_ttft_ms   meta-llama/Meta-Llama-3-8B-Instruct (4096)   187.23 ms   181.74 ms   1.03
mean_tpot_ms   meta-llama/Meta-Llama-3-8B-Instruct (4096)   84.89 ms    84.99 ms    1.00

This comment was automatically generated by workflow using github-action-benchmark.
