Introduction
When we compare coding language models (LLMs) and natural language (NL) language models, such as Llama3 vs. CodeLlama, we can readily identify some distinctions. In fact, coding LLMs are significantly more challenging to develop and work with than NL LLMs for the following reasons.
- Precision and Syntax Sensitivity: Code is a formal language with strict syntax rules and structures. A minor error, such as a misplaced bracket or a missing semicolon, can lead to errors that prevent the code from functioning. This requires the LLM to have a high degree of precision and an understanding of syntactic correctness, which is generally more stringent than the flexibility seen in natural language.
- Execution Semantics: Code not only needs to be syntactically correct, but it also has to be semantically valid—that is, it needs to perform the function it is supposed to do. Unlike natural language, where the meaning can be implicitly interpreted and still understood even if somewhat imprecisely expressed, code execution needs to yield very specific outcomes. If a code LLM gets the semantics wrong, the program might not work at all or might perform unintended operations.
- Context and Dependency Management: Code often involves multiple files or modules that interact with each other, and changes in one part can affect others. Understanding and managing these dependencies and contexts is crucial for a coding LLM, which adds a layer of complexity compared to handling standalone text in natural language.
- Variety of Programming Languages: There are many programming languages, each with its own syntax, idioms, and usage contexts. A coding LLM needs to potentially handle multiple languages, understand their unique characteristics, and switch contexts appropriately. This is analogous to a multilingual NL LLM but often with less tolerance for error.
- Data Availability and Diversity: While there is a vast amount of natural language data available from books, websites, and other sources, high-quality, annotated programming data can be more limited. Code also lacks the redundancy and variability of natural languages, which can make training more difficult.
- Understanding the Underlying Logic: Writing effective code involves understanding algorithms and logic. This requires not only language understanding but also computational thinking, which adds an additional layer of complexity for LLMs designed to generate or interpret code.
- Integration and Testing Requirements: For a coding LLM, the generated code often needs to be tested to ensure it works as intended. This involves integrating with software development environments and tools, which is more complex than the generally self-contained process of generating text in natural language.
Each of these aspects makes the development and effective operation of coding LLMs a challenging task, often requiring more specialized knowledge and sophisticated techniques compared to natural language LLMs.
The deployment and life-cycle management of an LLM-serving API is challenging because of the autoregressive nature of the transformer-based generation algorithm. For code LLMs, the problem is more acute for the following reasons:
- Real-Time Performance: In many applications, coding LLMs are expected to provide real-time assistance to developers, such as for code completion, debugging, or even generating code snippets on the fly. Meeting these performance expectations requires highly efficient models and infrastructure to minimize latency, which can be technically challenging and resource-intensive.
- Scalability and Resource Management: Code generation tasks can be computationally expensive, especially when handling complex codebases or generating lengthy code outputs. Efficiently scaling the service to handle multiple concurrent users without degrading performance demands sophisticated resource management and possibly significant computational resources. In addition, the attention computation at inference time has quadratic time complexity with respect to the input sequence length, and input sequences for code models are often significantly longer than those for NL models.
- Context Management: Effective code generation often requires understanding not just the immediate code snippet but also broader project contexts, such as libraries used, the overall software architecture, and even the specific project's coding standards. Maintaining and accessing this contextual information in a way that is both accurate and efficient adds complexity to the serving infrastructure.
- Security Concerns: Serving a coding LLM involves potential security risks, not only in terms of the security of the model itself (e.g., preventing unauthorized access) but also ensuring that the code it generates does not introduce security vulnerabilities into user projects. Ensuring both model and output security requires rigorous security measures and constant vigilance.
In summary, code LLMs are much harder to train and deploy for inference than NL LLMs. In this article, we cover the benchmarking of a code generation API developed entirely on Nutanix infrastructure.
Code Generation Workflow
Figure 1 shows the taxonomy of an LLM-assisted code generation workflow. A context and a prompt are combined through a prompt template to generate the input sequence for a large language model (LLM). The LLM then generates the output, which is passed to the evaluation system. If the output is not satisfactory, the user can revise the prompt, the prompt template, or the LLM used.
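To make the workflow concrete, here is a minimal Python sketch of the prompt-assembly step in Figure 1. The template text, endpoint URL, and response field are illustrative assumptions, not the exact implementation benchmarked in this article.

```python
# Hypothetical sketch of the Figure 1 workflow: a context and a prompt are merged
# through a prompt template, and the resulting input sequence is sent to an LLM
# serving endpoint. The template, URL, and response field are placeholders.
import requests

PROMPT_TEMPLATE = """[INST] You are a helpful coding assistant.
Context:
{context}

Task:
{prompt} [/INST]"""


def generate_code(context: str, prompt: str,
                  url: str = "http://localhost:8000/generate") -> str:
    full_prompt = PROMPT_TEMPLATE.format(context=context, prompt=prompt)
    resp = requests.post(url, json={"prompt": full_prompt}, timeout=120)
    resp.raise_for_status()
    return resp.json()["completion"]


if __name__ == "__main__":
    print(generate_code(context="Use only the Python standard library.",
                        prompt="Write a function to reverse a string."))
```

The generated output would then be passed to an evaluation step, such as the CodeBLEU scoring described later, closing the loop shown in Figure 1.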
Nutanix Cloud Platform
At Nutanix, we are dedicated to enabling customers to build and deploy intelligent applications anywhere—edge, core data centers, service provider infrastructure, and public clouds. Figure 2 shows how AI/ML is integrated into the core Nutanix® infrastructure layer.
As shown in Figure 2, the App layer runs on top of the infrastructure layer. The infrastructure layer can be deployed in two steps, starting with Prism Element™ login followed by VM resource configuration. Figure 3 shows the UI for the Prism Element controller.
After logging into Prism Element, we create a virtual machine (VM) hosted on our Nutanix AHV® cluster. As shown in Figure 4, the VM has the following resource configuration: Ubuntu® 22.04 operating system, 16 single-core vCPUs, 64 GB of RAM, and an NVIDIA® A100 Tensor Core passthrough GPU with 40 GB of memory. The GPU is installed with the NVIDIA RTX 15.0 driver for Ubuntu (NVIDIA-Linux-x86_64-525.60.13-grid.run). Large transformer-based deep learning models require GPUs or other compute accelerators with high memory bandwidth, large register files, and L1 memory.
The NVIDIA A100 Tensor Core GPU is designed to power the world’s highest-performing elastic datacenters for AI, data analytics, and HPC. Powered by the NVIDIA Ampere™ architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands.
To peek into the detailed features of the A100 GPU, we run the `nvidia-smi` command, a command-line utility built on top of the NVIDIA Management Library (NVML) and intended to aid in the management and monitoring of NVIDIA GPU devices. The output of the `nvidia-smi` command is shown in Figure 6; it reports driver version 515.86.01 and CUDA version 11.7. Figure 5 shows several critical features of the A100 GPU we used, and the details of these features are described in Table 1.
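The same information surfaced by `nvidia-smi` can also be queried programmatically through NVML. Below is a minimal sketch using the `pynvml` bindings; the field selection is illustrative and the package availability is an assumption about the environment.

```python
# Hypothetical sketch: querying the GPU through NVML from Python
# (pip install nvidia-ml-py), the same library that backs nvidia-smi.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)           # first GPU on the VM
print("Driver :", pynvml.nvmlSystemGetDriverVersion())  # e.g., 515.86.01
print("Name   :", pynvml.nvmlDeviceGetName(handle))     # e.g., an A100 40 GB part
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print("Memory :", mem.total // (1024 ** 2), "MiB total")
pynvml.nvmlShutdown()
```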
Benchmarking Hypotheses
We aim to measure the impact of code size, input token count, output token count, and different code generation tasks on code generation latency across a large sample. For this benchmarking, it is important to choose the right code dataset. There are several benchmarking datasets, such as HumanEval, MBPP, APPS, MultiPL-E, and GSM8K. For this article, we chose the Mostly Basic Programming Problems (MBPP) dataset (https://arxiv.org/abs/2108.07732), which consists of 974 programming tasks designed to be solvable by entry-level programmers. For the code LLM API, we used CodeLlama-7b-Instruct (https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf). The API server was implemented using FastAPI (https://fastapi.tiangolo.com/).
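While the exact production service is not reproduced here, a minimal FastAPI endpoint serving CodeLlama-7b-Instruct could look like the sketch below. The `/generate` route, request schema, and generation parameters are assumptions for illustration.

```python
# Minimal sketch of a CodeLlama-7b-Instruct serving endpoint with FastAPI.
# Route name, request schema, and generation settings are illustrative.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"  # fits on one A100 40 GB in fp16
)

app = FastAPI()


class GenerationRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256


@app.post("/generate")
def generate(req: GenerationRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    # Return only the newly generated tokens, not the echoed prompt.
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    return {"completion": completion}
```

Assuming the file is named `server.py`, the service can be started with `uvicorn server:app --host 0.0.0.0 --port 8000`.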
The MBPP dataset has the structure shown in Table 3: 974 rows and six features, namely task_id, text, code, test_list, test_setup_code, and challenge_test_list.
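The dataset can be pulled directly from the Hugging Face Hub; the sketch below assumes the public `mbpp` dataset card and its default split layout.

```python
# Sketch: loading MBPP from the Hugging Face Hub and inspecting one sample.
from datasets import load_dataset

mbpp = load_dataset("mbpp")   # splits: train, test, validation, prompt
sample = mbpp["test"][0]
print(sample.keys())          # task_id, text, code, test_list, test_setup_code, challenge_test_list
print(sample["text"])         # natural-language task description
print(sample["code"])         # reference solution
print(sample["test_list"])    # reference unit tests
```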
- We used CodeBLEU (https://arxiv.org/abs/2009.10297) to measure the fidelity of code generation and test generation with respect to the reference data provided in the MBPP dataset. Unlike the traditional BLEU score, CodeBLEU is specifically designed for code data.
- We measured the latency of each request and compared it with the corresponding input/output token counts (see the measurement sketch after this list). Specifically, we measured the following metrics:
- Time to First Byte (TTFB): an indicator of the responsiveness of the API, measured as the duration from the client making an HTTP request to the first byte being received by the client.
- Time to Last Byte (TTLB): an indicator of the responsiveness of the API, measured as the duration from the client making an HTTP request to the last byte being received by the client.
- Input Token Count: The number of tokens in the API call query.
- Output Token Count: The number of tokens in the API call response.
- We investigated whether the CodeBLEU score is correlated with TTFB, TTLB, input token count, and output token count.
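The sketch below illustrates how TTFB, TTLB, and token counts could be measured for one request against a streaming HTTP endpoint using the `requests` library. The endpoint URL and payload schema are placeholders, and the TTFB value is an application-layer approximation.

```python
# Sketch: measuring approximate TTFB/TTLB and token counts for one API request.
# The endpoint URL and payload schema are placeholders (assumptions).
import json
import time

import requests
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")


def measure_request(url: str, prompt: str) -> dict:
    start = time.perf_counter()
    ttfb, body = None, b""
    with requests.post(url, json={"prompt": prompt}, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=1):
            if ttfb is None:
                ttfb = time.perf_counter() - start  # first body byte received (approximate TTFB)
            body += chunk
    ttlb = time.perf_counter() - start              # last byte received (TTLB)
    try:
        output_text = json.loads(body).get("completion", "")
    except json.JSONDecodeError:
        output_text = body.decode("utf-8", errors="ignore")
    return {
        "ttfb_s": ttfb,
        "ttlb_s": ttlb,
        "input_tokens": len(tokenizer.encode(prompt)),
        "output_tokens": len(tokenizer.encode(output_text)),
    }


print(measure_request("http://localhost:8000/generate",
                      "Write a function to reverse a string."))
```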
Results
Fidelity Benchmarking
Code Generation Use Case
Figure 6 shows the CodeBLEU score for the code generation tasks in the MBPP dataset. The scores are consistent with the reasonable pass@1 accuracy reported in the seminal CodeLlama paper (https://arxiv.org/abs/2308.12950).
Test Generation Use Case
Figure 7 shows the CodeBLEU score for the test generation tasks in the MBPP dataset. The scores are consistent with the reasonable pass@1 accuracy reported in the seminal CodeLlama paper (https://arxiv.org/abs/2308.12950).
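For reference, a per-sample CodeBLEU score can be computed along the following lines. This is a minimal sketch assuming the open-source `codebleu` package, not the exact evaluation harness used for Figures 6 and 7.

```python
# Sketch: CodeBLEU between a generated solution and the MBPP reference code,
# assuming the open-source `codebleu` package (pip install codebleu).
from codebleu import calc_codebleu

reference = "def reverse_string(s):\n    return s[::-1]\n"
generated = "def reverse_string(s):\n    return ''.join(reversed(s))\n"

result = calc_codebleu([reference], [generated], lang="python",
                       weights=(0.25, 0.25, 0.25, 0.25))
print(result["codebleu"])   # overall score in [0, 1]
```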
Latency and Token Count Benchmarking
Code Generation
In this use case, a text string is passed in the API query and a code body is returned as the API response.
Figure 8 shows the correlation matrix among TTFB, TTLB, input token count, output token count, and CodeBLEU score for the code generation use case. We can make the following observations from Figure 8:
- TTLB and Output Token Count have a relatively high correlation score of 0.36 compared to the other pairs.
- Input Token Count (text) and Output Token Count (code) have a correlation score of 0.18, which is expected: we send a text body as input and receive a code body in response, and in most cases a longer input text yields a longer code block.
- The CodeBLEU score shows hardly any correlation with the other factors.
From Figure 8, it appears that the relationship between output token count and TTLB is worth further scrutiny. Figure 9 shows the joint plot between time to last byte (TTLB), in seconds, and output token count. It clearly shows that TTLB increases with output token count. This proportionality can be explained by the fact that the LLM generates one token at a time.
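The correlation matrix and joint plot can be produced with standard pandas and seaborn calls. The sketch below assumes the per-request metrics have been logged to a CSV file with hypothetical column names.

```python
# Sketch: reproducing the correlation matrix (Figure 8) and joint plot (Figure 9)
# from a per-request log. "results.csv" and its column names are placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")   # columns: ttfb_s, ttlb_s, input_tokens, output_tokens, codebleu

print(df.corr(numeric_only=True))                       # pairwise Pearson correlations

sns.jointplot(data=df, x="output_tokens", y="ttlb_s")   # TTLB vs. output token count
plt.savefig("ttlb_vs_output_tokens.png", dpi=150)
```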
Test Generation
In this use case, a code body is passed in the API query and a code body is returned as the API response.
Figure 10 shows the correlation matrix among TTFB, TTLB, input token count, output token count, code line count, and CodeBLEU score for the test generation use case. We can make the following observations from Figure 10:
- TTLB and Output Token Count have a fairly high correlation score of 0.86 compared to other pairs.
- For test generation, we have an additional field, code line count, which is, unsurprisingly, highly correlated with the input token count (a correlation score of 0.92).
- Input Token Count (code) and Output Token Count (code) have a correlation score of 0.15; in general, a longer code body is expected to yield a longer test code body.
From Figure 10, it appears that the relationship between output token count and TTLB is worth further scrutiny. Figure 11 shows the joint plot between time to last byte (TTLB), in seconds, and output token count. It clearly shows that TTLB increases with output token count. This proportionality can be explained by the fact that the LLM generates one token at a time.
Empirical Best Practices/Insights for Code Generation and Unit Test Generation
- For both code generation and test generation use cases, the response time varies proportionally with the output token count.
- The CodeBLEU score remains relatively invariant with respect to input/output token count.
- The response times for both use cases typically range between 0 and 20 s.
Impact
GitHub Copilot has delivered massive productivity gains, as high as 55%, for developers across verticals (Link). In fact, its economic impact is poised to grow beyond $1.5T in the next few years (Link). With the rapid advancement of the HuggingFace and Llama ecosystems, open large language models (LLMs) are experiencing significant progress, which makes LLM application development accessible to typical enterprises.
In this macro-economic climate, the potential benefits of developing open-source LLM-based code assistants are enormous, but the evaluation of these code assistants is often challenging because of infrastructure management, data privacy, and dependency management.
As you contemplate how AI will change your business, Nutanix GPT-in-a-Box 2.0 makes getting started with GenAI a snap, letting you deploy real use cases and solutions built on standard hardware without the need for a special architecture. As demonstrated, LLMs change fast, and with Nutanix you can stay ahead of the curve with a secure, full-stack platform to run GenAI data and apps anywhere, enhancing developer productivity.
© 2024 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product, feature and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. Other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). This post may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such a site. Certain information contained in this post may relate to or be based on studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this post, they have not been independently verified, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from third-party sources.
This post may contain express and implied forward-looking statements, which are not historical facts and are instead based on our current expectations, estimates and beliefs. The accuracy of such statements involves risks and uncertainties and depends upon future events, including those that may be beyond our control, and actual results may differ materially and adversely from those anticipated or implied by such statements. Any forward-looking statements included herein speak only as of the date hereof and, except as required by law, we assume no obligation to update or otherwise revise any of such forward-looking statements to reflect subsequent events or circumstances.