pip3 install -r requirements.txt
Please create a file named .env in the /RAG directory and add the following content. Remember to replace <YOUR_API_KEY> with your own OpenAI API key and <YOUR_COHERE_KEY> with your own Cohere API key.
base_url=https://drchat.xyz
api_key=<YOUR_API_KEY>
embedding_model=text-embedding-3-small
llm=gpt4-1106-preview
vectordb=sionna_db
evaluator=gpt4-32k
cohere_key=<YOUR_COHERE_KEY>
reranker=rerank-english-v3.0
For data generation, open the parallel_request.py file and locate the line:
os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"
Replace <YOUR_API_KEY> with your own OpenAI API key, as specified in the Create .env section.
In the /DataPreparation directory, locate the .env file and replace <YOUR_API_KEY> with your actual OpenAI API key.
For fine-tuning, open the model_creation.ipynb and model_evaluation.ipynb notebooks. Locate the line:
os.environ["AZURE_OPENAI_KEY"] = "<YOUR_AZURE_API_KEY>"
Replace <YOUR_AZURE_API_KEY> with your own Azure OpenAI key for fine-tuning.
The project comprises three main sections: data generation, instruction fine-tuning, and RAG.
- Data Generation: This section involves crawling, processing, and generating instruction-answer pairs for the Sionna dataset.
- Instruction Fine-Tuning: This section focuses on fine-tuning GPT models using the instruction-answer pairs generated in the data generation section.
- RAG: This section enhances the answer generation process using Retrieval-Augmented Generation (RAG).
For detailed experimental procedures and results, please refer to the associated paper.
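As a rough illustration of the retrieve-then-rerank flow in the RAG section: a retriever first scores candidate chunks against the query, and the top results are passed to a reranker (here, the Cohere model named in .env) before answer generation. The sketch below uses a naive word-overlap score purely as a stand-in for the project's actual embedding and Cohere rerank calls; `retrieve` is an illustrative helper, not code from this repository:

```python
def retrieve(query, documents, top_k=3, score=None):
    """Return the top_k documents ranked by a scoring function.

    `score` stands in for an embedding-similarity or rerank call; the
    default word-overlap metric is purely illustrative.
    """
    if score is None:
        score = lambda q, d: len(set(q.lower().split()) & set(d.lower().split()))
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

docs = [
    "Sionna provides ray tracing for radio propagation.",
    "The Munich scene models a city environment.",
    "Cooking recipes for pasta.",
]
print(retrieve("ray tracing in Sionna", docs, top_k=2))
```

In the actual pipeline, the scoring function would be replaced by the embedding model and reranker configured in .env (text-embedding-3-small and rerank-english-v3.0).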
To run the data generation code, navigate to the /DataPreparation directory and execute the process.sh script.
In the markdown and chunk directories, you will find:
- Markdown files: These files contain the information crawled by crawler.py from the Sionna official website.
- Cleaned and chunked JSONL files: These files are produced by the following scripts from the RAG directory:
/code/preprocess/clean.py
/code/preprocess/chunk.py
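The contents of clean.py and chunk.py are not shown here, but a typical chunking step splits each cleaned document into fixed-size overlapping pieces before embedding. A hedged sketch, assuming character-based chunks (the sizes and the `chunk_text` helper are illustrative, not the project's actual settings):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks.

    Consecutive chunks share `overlap` characters so that sentences
    cut at a boundary still appear whole in at least one chunk.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        # Stop once the final chunk reaches the end of the text
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be written as one JSONL record for the vector database build step.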
To execute the fine-tuning process, please follow the code in model_creation.ipynb. The Sionna dataset is partitioned into training, validation, and testing sets using split_IA_total_to_datasets.py.
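A dataset split of this kind is commonly a seeded shuffle followed by slicing. The sketch below is an assumption about what split_IA_total_to_datasets.py does, not its actual code, and the 80/10/10 ratio is illustrative:

```python
import random

def split_dataset(pairs, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle instruction-answer pairs and split into train/val/test.

    The ratios and seed here are assumptions, not necessarily the
    values used by split_IA_total_to_datasets.py.
    """
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle
    n = len(pairs)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])
```

Fixing the seed makes the split reproducible across runs, which matters when comparing fine-tuned checkpoints against the same held-out test set.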
To execute the RAG process, begin with run.sh for data cleaning, chunking, and vector database creation. Then refer to run_gpt4-128k.sh for the retriever and reranker steps.
Here is a demo video demonstrating how to run the network simulation using natural language with our NetLLM.
demo.mp4
How can you configure and compute propagation paths in a Munich scene using Sionna’s compute_paths function? Include details on setting up planar arrays for transmitters and receivers, adding a transmitter and a receiver to the scene, computing the paths, and visualizing the paths in a preview.
Our LLM provides step-by-step explanations with code snippets as shown in the Figure below:
We combine the code as follows:
# Step 1: Import the required modules
import sionna
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray
# Step 2: Load the Munich scene
scene = load_scene(sionna.rt.scene.munich)
# Step 3: Configure Antenna Arrays for Transmitters and Receivers
# Configure the antenna array for all transmitters
scene.tx_array = PlanarArray(num_rows=1, num_cols=1, vertical_spacing=0.5, horizontal_spacing=0.5, pattern="tr38901", polarization="V")
# Configure the antenna array for all receivers
scene.rx_array = PlanarArray(num_rows=1, num_cols=1, vertical_spacing=0.5, horizontal_spacing=0.5, pattern="dipole", polarization="cross")
# Step 4: Create and add a transmitter to the scene
tx = Transmitter(name="tx1", position=[8.5, 21, 27]) # Position needs to be adjusted based on the Munich scene coordinates
scene.add(tx)
# Create and add a receiver to the scene
rx = Receiver(name="rx1", position=[45, 90, 1.5], orientation=[0, 0, 0]) # Adjust position and orientation as needed
scene.add(rx)
# Make the transmitter point towards the receiver
tx.look_at(rx.position)
# Step 5: Set Up the Carrier Frequency
scene.frequency = 2.14e9 # 2.14 GHz; adjust as necessary
scene.synthetic_array = True # Optimized computations for large arrays
# Step 6: Compute Propagation Paths
paths = scene.compute_paths(max_depth=5, num_samples=int(1e6), los=True, reflection=True, diffraction=True, scattering=True) # Include all path interactions
# Step 7: Visualization
scene.preview(paths=paths, show_devices=True, show_paths=True)
Simulation Result:
The code is as follows:
# Import the necessary module
import sionna
# Import the function to load scenes
from sionna.rt import load_scene
# Load an example scene
scene = load_scene(sionna.rt.scene.etoile)
# Scene names such as sionna.rt.scene.etoile are predefined constants in
# the sionna.rt module; see the Sionna documentation for the full list.
scene.preview()
Simulation Result:
The code is as follows:
import sionna
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray
# Load an example scene (e.g., Munich scene)
scene = load_scene(sionna.rt.scene.munich)
# Configure antenna array for all transmitters in the scene
scene.tx_array = PlanarArray(num_rows=8,
num_cols=2,
vertical_spacing=0.7,
horizontal_spacing=0.5,
pattern="tr38901",
polarization="VH")
# Configure antenna array for all receivers in the scene
scene.rx_array = PlanarArray(num_rows=1,
num_cols=1,
vertical_spacing=0.5,
horizontal_spacing=0.5,
pattern="dipole",
polarization="cross")
# Create a transmitter object and add it to the scene
tx = Transmitter(name="tx",
position=[8.5, 21, 27],
orientation=[0, 0, 0])
scene.add(tx)
# Create a receiver object and add it to the scene
rx = Receiver(name="rx",
position=[45, 90, 1.5],
orientation=[0, 0, 0])
scene.add(rx)
# Compute paths with specific parameters
paths = scene.compute_paths(max_depth=3,
method="fibonacci",
num_samples=int(1e6),
los=True,
reflection=True,
diffraction=False,
scattering=False,
check_scene=True)
scene.preview(paths=paths, resolution=[1000, 600])
Simulation Result:
The code is as follows:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
a, tau = paths.cir()
# Provided compute_gain function
def compute_gain(a, tau):
    """Compute |H(f)|^2 at f = 0 where H(f) is the baseband channel frequency response"""
    a = tf.squeeze(a, axis=-1)
    h_f_2 = tf.math.abs(tf.reduce_sum(a, axis=-1))**2
    h_f_2 = tf.where(h_f_2 == 0, 1e-24, h_f_2)
    g_db = 10 * np.log10(h_f_2)
    return tf.squeeze(g_db)
print(a.shape)
n = 400
plt.figure()
plt.stem(tau[0,0,0,:]/1e-9, 10*np.log10(np.abs(a[0,0,0,0,0,:,0])**2), basefmt=" ")
plt.title("Channel impulse response")
plt.xlabel("Delay (ns)")
plt.ylabel("|a|^2 (dB)")
plt.show()
# # Example usage of compute_gain (optional here) for receiver at index n
# g_db_n = compute_gain(a[n:n+1], tau[n:n+1])
# print(f"Computed gain for the receiver {n}: {g_db_n.numpy()} dB")
Simulation Result:
The code is as follows:
import sionna
from sionna.rt import load_scene, PlanarArray, Transmitter, Receiver, Camera
# Assuming you have the Sionna package installed and the necessary dependencies,
# load the Munich scene from the Sionna Ray Tracing module
scene = load_scene(sionna.rt.scene.munich)
# Configure the antenna array for all transmitters
scene.tx_array = PlanarArray(num_rows=8,
num_cols=2,
vertical_spacing=0.7,
horizontal_spacing=0.5,
pattern="tr38901",
polarization="VH")
# Configure the antenna array for all receivers
scene.rx_array = PlanarArray(num_rows=1,
num_cols=1,
vertical_spacing=0.5,
horizontal_spacing=0.5,
pattern="dipole",
polarization="cross")
# Add a transmitter
tx = Transmitter(name="tx",
position=[8.5, 21, 30], # Position adjusted for the example
orientation=[0, 0, 0])
scene.add(tx)
tx.look_at([40, 80, 1.5])
# Compute the coverage map with specified parameters
cm = scene.coverage_map(cm_cell_size=[1.,1.], # Size of each cell in meters
num_samples=int(10e6)) # Number of rays to trace
# Fit the camera to view the whole scene. This step configures the camera
# for visualization and is adapted from the context to fit within this example
tx_pos = scene.transmitters["tx"].position.numpy()
bird_pos = tx_pos.copy()
bird_pos[-1] += 100 # Height above the transmitter
scene.add(Camera("birds_view", position=bird_pos, look_at=tx_pos))
# Visualize the coverage map in 2D for the first transmitter
cm.show(tx=0)
Simulation Result:
The code is as follows:
# Open 3D preview (only works in Jupyter notebook)
scene.preview(coverage_map=cm)
Simulation Result:
The NetLLM module and simulator are developed and maintained by:
- Jiewen Liu (NCSU)
- Zhiyuan Peng (NCSU)
- Yuchen Liu (NCSU)
This work is supported by:
- NCSU Research Funds
- Microsoft Accelerating Foundation Models Research Funds
@inproceedings{Liu2024revolu,
title={Revolutionizing Wireless Modeling and Simulation with Network Oriented LLMs},
author={Liu, J. and Peng, Z. and Xu, D. and Liu, Y.},
booktitle={43rd IEEE International Performance Computing and Communications Conference},
year={2024}
}