Problems with pluto #35

Open
LunaWuna opened this issue Dec 20, 2022 · 31 comments

Comments

LunaWuna commented Dec 20, 2022

# cd /lib/firmware; echo system_top.bit.bin > /sys/class/fpga_manager/fpga0/firmware
# echo 79024000.cf-ad9361-dds-core-lpc > /sys/bus/platform/drivers/cf_axi_dds/unbind
# echo 79020000.cf-ad9361-lpc > /sys/bus/platform/drivers/cf_axi_adc/unbind
# echo 7c400000.dma > /sys/bus/platform/drivers/dma-axi-dmac/unbind
# echo 7c420000.dma > /sys/bus/platform/drivers/dma-axi-dmac/unbind
# echo 7c420000.dma > /sys/bus/platform/drivers/dma-axi-dmac/bind
# echo 7c400000.dma > /sys/bus/platform/drivers/dma-axi-dmac/bind
# echo 79024000.cf-ad9361-dds-core-lpc > /sys/bus/platform/drivers/cf_axi_dds/bind
# echo 79020000.cf-ad9361-lpc > /sys/bus/platform/drivers/cf_axi_adc/bind
sh: write error: No such device

This only happens with the new system_top.bit.bin; the same thing happens whether I compile it myself or use the release build. Unbinding and rebinding without the new FPGA firmware works fine.

LunaWuna (Author) commented Dec 20, 2022

Oops, I messed up the formatting; editing to fix.

@LunaWuna (Author)

Update: I installed the original Pluto firmware and that doesn't produce any errors. However, when I start an eNB, it errors out with Error reading rx ringbuffer. Invalid header again.

@LunaWuna changed the title from "error when unbinding and rebinding, sh: write error: No such device" to "Problems with pluto" on Dec 20, 2022
@ofontbach (Collaborator)

Hi @LunaWuna,

Sorry, could you please provide more details about the steps you followed leading to the error you are reporting?

To give you some context, we have tried building the timestamping solution from the repo from scratch (i.e., FPGA image and software, by following the appnote) and we don't seem to get errors when booting or using the Pluto (i.e., we did a successful tx-rx test and also played a little with pdsch_enb/pdsch_ue).

As for the second part of your message, if I understand correctly: of course, if you use the original FPGA bitstream shipped with the board, it won't include the FPGA side of the timestamping solution (which currently attaches the timestamps as metadata to each I/Q sample packet exchanged through DMA), so it is to be expected that our driver will complain that the expected header is not there.
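
To make that failure mode concrete, here is a minimal sketch of the kind of check a timestamping-aware driver performs on each received DMA packet. The magic word, struct layout, and function name are hypothetical (the actual zynq_timestamping wire format may differ); the point is only that raw samples from a stock bitstream carry no such header, so the check fails.

#include <stdint.h>
#include <string.h>

/* Hypothetical header prepended to each I/Q DMA packet by the FPGA:
 * a magic word followed by the 64-bit sample-clock timestamp.
 * Illustration only; not the project's actual wire format. */
#define TS_MAGIC 0x54535453u /* made-up value */

typedef struct {
    uint32_t magic;
    uint32_t num_samples;
    uint64_t timestamp;
} ts_header_t;

/* Returns 0 and fills *timestamp when a valid header is found,
 * -1 otherwise (e.g., stock bitstream -> "Invalid header"). */
int check_rx_header(const uint8_t *pkt, size_t len, uint64_t *timestamp)
{
    ts_header_t hdr;
    if (len < sizeof(hdr)) {
        return -1;
    }
    memcpy(&hdr, pkt, sizeof(hdr));
    if (hdr.magic != TS_MAGIC) {
        return -1; /* no metadata at the start of the packet */
    }
    *timestamp = hdr.timestamp;
    return 0;
}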

@LunaWuna (Author)

Hello,
I've managed to get the txrx test and pdsch_enb to work, but when starting srsenb using sudo LD_LIBRARY_PATH=./bin_app nice -20 ./bin_app/srsenb/srsenb ./bin_app/srsenb/enb.conf it starts normally and then, after a couple of seconds, prints Error reading rx ringbuffer. Invalid header to the console and doesn't transmit any signal.

Here is what it says:
[screenshot of the console output]

@ofontbach (Collaborator)

Hi @LunaWuna,

it's great to see that you managed to get the txrx and pdsch_enb to work with the Pluto. Did you simply update your local clone to the latest repo code, or did you find a workaround to your issues? In the latter case, would you mind sharing it for the benefit of the community?

As for the enb, I'm afraid we haven't really tested on the Pluto (we mostly tested bigger/newer Zynqs, capable of accelerating more PHY DSP in the FPGA too). We know that the Zynq on the Pluto presents a few bottlenecks (CPU power, USB interface, small FPGA area) that cap the attainable performance. The pdsch_enb only has traffic in one direction, which mitigates the effects of the bottlenecks. As for the enb, my assumption is simply that the board can't keep up with the traffic + processing load. Maybe with some effort and low-level optimizations of the code here and there, some fine-tuning of DMA packet sizes, etc., you'd get it to work somewhat more stably.

@LunaWuna (Author)

I've kept fiddling around and am still at the same point: no signal is transmitted at all when listening with another SDR, and that error message comes up. Also, when trying to transmit a signal just with GNU Radio and the bitstream loaded, it won't transmit anything, but I don't know whether that's expected or not.

Error-wise, I was running a Pluto Plus; I just ran the regular Pluto software on it and that seemed to fix the sh: write error: No such device when loading the bitstream.

@ArielRFF

Hello,
I am also running zynq_timestamping on a Pluto+ and get the same error message as LunaWuna when trying to execute srsenb:
Error reading rx ringbuffer. Invalid header

Note that it is running with 6 PRBs only, which should reduce the Zynq CPU load drastically.
Any ideas on how to debug this error?

@LunaWuna (Author)

Yeah, I was running with 6 PRBs. The Pluto should definitely have enough bandwidth to not drop samples at that bitrate, IIRC. I have no idea whether this is a Pluto problem or a Pluto Plus problem.

I wonder if it would be possible to build a bitstream for the Pluto Plus firmware, since there are a few changes needed (see the Pluto Plus FW repo).

ArielRFF commented Feb 3, 2023

@LunaWuna
I loaded Pluto firmware v0.34 instead of using the Pluto+ default firmware. It didn't solve the problem; I'm still getting "Error reading rx ringbuffer. Invalid header".
Did you try it?

@ofontbach
I am running srsenb with 6 PRBs only. Shouldn't this reduce the Zynq CPU load?

@ofontbach (Collaborator)

Hi @ArielRFF,

Yes, 6 PRBs should represent the minimum workload for srsenb. Still, as I mentioned, our testing with the Pluto has not covered srsenb (only tx-rx and a limited validation of pdsch_enb). I still believe that the Pluto has many bottlenecks that might have a negative impact on srsenb performance (e.g., to give you a hint, with newer + bigger Zynqs, 6 PRBs was the most we could run for srsUE without accelerating at least part of the PHY in the FPGA).

I'm sure you've already done it, but in case not, I'd recommend compiling in release mode and making sure that your CPU governor is set to performance mode. Also, make sure to use the lowest level of logging possible, or disable it entirely, to enhance performance.

uptools commented Feb 5, 2023

> Hi @ArielRFF,
>
> Yes, 6 PRBs should represent the minimum workload for srsenb. Still, as I mentioned, our testing with the Pluto has not covered srsenb (only tx-rx and a limited validation of pdsch_enb). I still believe that the Pluto has many bottlenecks that might have a negative impact on srsenb performance (e.g., to give you a hint, with newer + bigger Zynqs, 6 PRBs was the most we could run for srsUE without accelerating at least part of the PHY in the FPGA).
>
> I'm sure you've already done it, but in case not, I'd recommend compiling in release mode and making sure that your CPU governor is set to performance mode. Also, make sure to use the lowest level of logging possible, or disable it entirely, to enhance performance.

When using iio_buffers larger than 5000 samples, the Pluto seems to work OK at a sample rate of 1.92 MSPS (enough for 6 PRBs), TX and RX via USB, although this introduces a minimum buffering delay of 2.6 ms.

When running locally (executed by the Zynq) it can keep up with much higher sample rates, but the Pluto's Zynq 7010 can only run small applications.
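
For reference, the 2.6 ms figure follows directly from buffer length divided by sample rate; a quick sanity check using the numbers quoted above (sketch only):

#include <stdio.h>

int main(void)
{
    const double sample_rate_sps = 1.92e6; /* 6 PRB LTE sample rate */
    const double buffer_samples  = 5000.0; /* iio buffer size that works over USB */
    /* one full buffer of latency */
    printf("buffering delay = %.2f ms\n",
           1e3 * buffer_samples / sample_rate_sps); /* prints ~2.60 ms */
    return 0;
}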

ofontbach (Collaborator) commented Feb 7, 2023

Thanks for this input @uptools! In the project we provide in the repository for the Pluto, the FPGA can store up to 4 ms worth of data for a 6 PRB system (as specified in the related TCL script, the DAC timestamping block uses 4 buffers that can hold up to 2000 samples each; see below):

set dac_fifo_timestamp_e_0 [ create_bd_cell -type ip -vlnv softwareradiosystems.com:user:dac_fifo_timestamp_enabler:1.0 dac_fifo_timestamp_e_0 ]
  set_property -dict [ list \
   CONFIG.PARAM_BUFFER_LENGTH {4} \
...
   CONFIG.PARAM_MAX_DMA_PACKET_LENGTH {2000} \
 ] $dac_fifo_timestamp_e_0

In this sense, 2.6 ms should be OK and is within the 4 ms required by LTE (that is, if the CPU can keep the pace, of course). As you point out, using the Zynq as a fully embedded processor (i.e., software in the ARM, timestamping in the FPGA) removes many of the bottlenecks, and it's true that newer/bigger Zynqs can host a much more demanding system. That being said, part of the optimization work, in any case, is always to fine-tune the buffer sizes (at both the FPGA and CPU ends), as well as to adjust the frequency and timing (advance) offsets.

LunaWuna (Author) commented Feb 7, 2023

I've tried compiling the bitstream with the 5000-sample buffer and also using the performance governor; however, it still doesn't fix the ringbuffer error and no signal is transmitted. There also isn't a signal when transmitting with other software while the bitstream is loaded, but I have no idea if that's normal or not.

LunaWuna (Author) commented Feb 7, 2023

When running and checking bandwidth via an Ethernet connection, there is only RX data sent from the Pluto and almost no TX bandwidth back to the Pluto.

@uptools mentioned this issue on Feb 12, 2023
@pgreenland

I'm experiencing similar problems to @LunaWuna, unfortunately.

Marks for the documentation: I was able to get the released bitfile loaded into the Pluto and the drivers rebound without any issues.

Everything built fine and the txrx test for the pluto ran without issue.

I see the following output in matplotlib:

[screenshot: txrxtest_matplotlib]

Zooming in on the first block we have something like:

[screenshot: txrxtest_matplotlib, first block zoomed]

Not sure if that's what was expected? Might be helpful to add a reference screenshot to the documentation.

Not sure if it was required, but I've incorporated the iio driver into srsRAN directly. After doing so I realised that may not have been the most appropriate route. I attempted to launch the eNodeB, which goes well; everything looks similar to when it's launched with my usual SDR. However, a spectrum analyser connected directly to the Pluto TX port indicates that nothing is being transmitted. I also see the occasional rx ringbuffer header error, but there appears to be network traffic between the PC and the SDR's virtual Ethernet interface the whole time (as seen in Wireshark).

I bought the Pluto pretty much for this, so I don't have a great deal of experience with it. I'm happy that everything's working RF-wise, though: I was able to build and install Analog's iio-oscilloscope application and configure the DDS block to transmit a CW at the same frequency, observing +3 dBm on the spectrum analyser.

Happy to debug a little further, but I would appreciate any pointers to save me falling down too many rabbit holes.

@pgreenland

Attempting to monitor the transmission from the Pluto while running run_txrx_plutosdr.sh, I may be seeing a similar issue.

I tweaked and rebuilt it, increasing the number of frames from 20 to 20000 (tweaking the hard-coded nof_frames), hoping to extend the transmission time. I also commented out the printf statements in the main loop, in case they slowed things down.

When starting the app I see a small burst of output on the spectrum analyser, presumably depending on its position in the sweep. After the initial burst there's no further output.

I can reproduce the same thing with the eNodeB app if I start and stop it enough times, with the familiar LTE spectrum appearing briefly on the analyser and then stopping.

Any idea what could be causing the transmission to stop after a very short period?

@pgreenland

@ofontbach Without wanting to hijack the ticket with my ramblings, I rebuilt the HDL project and had a little nose around.

I focused on the dac_fifo_timestamp_enabler block. I don't fully understand how it works; however, it appears to sync to the incoming metadata and I/Q stream via the magic words, extracting timestamps and storing the data in one of 4 (in the Pluto's case) memories. It then performs some comparisons on the timestamps, checking whether a buffer is early or late. It seems that if it's late it will be discarded.

I would infer that the high-level goal is a variable-sized ring buffer, using dual-ported memory (as the DAC operates in a different clock domain to the AXI bus?), pausing the output to the DAC (thereby outputting zeros) until the timestamp of the block at the head of the buffer is reached, while discarding any late samples.

Commenting out all of the timestamp range checks, keeping only the default case to send the entire buffer, for example for memory0:

            -- check the time difference between the current clock count and the received timestamp (accouting for the latency resulting from the internal sample-buffering scheme)
--            if timestamp_header_value_mem0_minus_buffer_latency > current_lclk_count_int then                                                         -- the I/Q samples provided by the PS are meant to be transmitted later in time
--              fwd_early_mem0 <= '1';
--              fwd_late_mem0 <= '0';
--              fwd_time_difference_mem0 <= timestamp_header_value_mem0_DACxNclk - current_lclk_count_int - cnt_internal_buffer_latency_64b; -- @TO_BE_TESTED: check that we are not waiting forever
--              PS_DAC_data_RAMB_read_index_memory0 <= (others => '0');                                                             -- once enough 0s have been forwarded, then we'll read from the first stored sample onwards
--            elsif timestamp_header_value_mem0_minus_buffer_latency < current_lclk_count_int and timestamp_header_value_mem0_DACxNclk > cnt_0_64b then -- the I/Q samples provided by the PS were meant to be transmitted earlier in time
--              fwd_early_mem0 <= '0';
--              fwd_late_mem0 <= '1';
--              fwd_time_difference_mem0 <= baseline_late_time_difference_mem0 + cnt_internal_buffer_latency_64b;                                                   -- @TO_BE_TESTED: check that we are not always forwarding 0s

--              -- check that if the current late is within the IQ-frame size (i.e., avoid setting a negative reading index)
--              if baseline_late_time_difference_mem0 < current_num_samples_mem0_DACxNclk then
--                PS_DAC_data_RAMB_read_index_memory0 <= baseline_late_time_difference_mem0(C_NUM_ADDRESS_BITS-1 downto 0) + cnt_internal_buffer_latency_64b(C_NUM_ADDRESS_BITS-1 downto 0); -- we'll offset the initial read address the amount of samples corresponding to the time we're late by; @TO_BE_TESTED: check that we always obtain a meaningful index value
--              else
--                PS_DAC_data_RAMB_read_index_memory0 <= current_num_samples_mem0_DACxNclk; -- nothing to read (i.e., set a value beyond the biggest actual valid read index)
--              end if;
--            else                                                                                                                                      -- FPGA and PS are perfectly aligned in time or timestamping has been disabled from SW
              fwd_early_mem0 <= '0';
              fwd_late_mem0 <= '0';
              fwd_time_difference_mem0 <= (others => '0');
              PS_DAC_data_RAMB_read_index_memory0 <= (others => '0'); -- we'll read from the first stored sample onwards
--            end if;

This gets the Pluto transmitting when the eNodeB is started, albeit horribly mistimed, I suspect.

However, as soon as one of the "Error reading rx ringbuffer. Invalid header" messages appears in the enb terminal, the transmission stops.

Connected via the USB transport, these seem to occur almost exactly 10 s after the enb is started each time, tested over 5 runs.

I think that's about as far as I'm going to get debug-wise; my VHDL knowledge is limited, especially when it comes to DMA and AXI interconnects.

Would love to see this project working though; let me know if there's anything I can do to help.

@ofontbach (Collaborator)

Hi @pgreenland,

I'll try to answer all of your questions here.

The tx-rx example application just sends 3 bursts of data at given time instants (timestamps) within a larger transmission time (i.e., to check that the transmitted data is aligned, time-wise, as needed within the larger transmission time). If you want to change the number of active transmissions, what you want to change is not nframes but if (nframe == 1 || nframe == 4 || nframe == 9) {.
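
In other words, to get more (or longer) active bursts you extend that condition rather than only raising the frame count. A rough sketch of the control flow, in C, with everything except the quoted frame indices hypothetical:

/* Sketch only: the real example builds and timestamps each burst;
 * here the burst body is just a placeholder comment. */
void txrx_schedule_sketch(int nof_frames)
{
    for (int nframe = 0; nframe < nof_frames; nframe++) {
        if (nframe == 1 || nframe == 4 || nframe == 9) {
            /* queue a timestamped burst for this frame */
        }
        /* all other frames: nothing is transmitted */
    }
}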

As for why there is no transmission, I think that it has to do with the USB interface and the samples arriving too late at the FPGA (below I'll try to explain how the DAC timestamping logic works). Two things are important towards this end:

  1. The exact same solution (software and FPGA) is working well for the AntSDR (see the eNB running on the AntSDR here), which uses Ethernet instead of USB to carry the I/Qs from the host to the Zynq. Hence, it does not seem to be a problem of the solution per se, but a bottleneck caused by the USB interfacing (as you can see in some comments above, you could try to adjust the packet sizes, priorities, etc., but fixing the situation probably requires some low-level code optimizations and fine-grained setup adjustments).
  2. Unfortunately, at the current moment, although the FPGA has the ability to flag late/underflow situations, this part is not yet integrated in our RF driver and, thus, such situations go unreported. If there were space in the FPGA you could add an ILA core to check the assertion of such flags. In any case, I've created a new issue to add reporting of lates/underflows.

Regarding the DAC timestamping block, as you said, it detects the start of each DMA packet by searching for a few known header words, then extracts the timing information (i.e., the time at which the first I/Q sample provided in that DMA packet needs to hit the DAC) and runs a control FSM that aligns the transmission (a small sketch follows this list):

  • When data arrives early at the FPGA (the desired situation), the FSM makes sure to read it at the appropriate moment so that it arrives at the DAC when required. For instance, a packet of 1000 samples arrives at time 500 with a timestamp of 600: the FSM will read the data after 100 clock cycles (actually a little earlier, taking into account the internal latency of the block).
  • When data arrives late at the FPGA (what I suspect is happening), the FSM discards the number of samples that are late and starts reading at the Nth sample in the packet in an attempt to re-align the transmission. For instance, a packet of 1000 samples arrives at time 500 with a timestamp of 400: the FSM will start reading at the 101st sample (i.e., it discards the first 100).
  • In case data arrives at the precise moment, or timestamping is disabled, the FSM simply forwards the samples to the DAC.
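
A small C model of the early/late decision described in the list above (illustration only; the real logic is the VHDL quoted earlier, which additionally compensates for the block's internal buffering latency):

#include <stdint.h>

typedef struct {
    uint64_t wait_samples; /* early: sample periods to wait before reading */
    uint64_t skip_samples; /* late: leading samples to discard */
    int      drop_packet;  /* late by a full packet or more: nothing to send */
} tx_align_t;

/* Decide how to align one DMA packet given its timestamp, the current
 * sample-clock count and the packet length (all in samples). */
tx_align_t align_tx_packet(uint64_t timestamp, uint64_t now, uint64_t packet_len)
{
    tx_align_t a = {0, 0, 0};
    if (timestamp > now) {
        /* e.g. a 1000-sample packet arriving at t=500 with timestamp 600:
         * wait 100 sample periods, then read from the first sample. */
        a.wait_samples = timestamp - now;
    } else if (timestamp < now) {
        uint64_t late = now - timestamp;
        if (late < packet_len) {
            /* e.g. timestamp 400 at t=500: skip the first 100 samples
             * and start reading at the 101st. */
            a.skip_samples = late;
        } else {
            a.drop_packet = 1;
        }
    }
    /* timestamp == now (or timestamping disabled): forward immediately. */
    return a;
}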

As for the internal buffering, it is currently sized taking into account the LTE requirements (acknowledgment of received data in 4 ms) for a 6 PRB cell (i.e., 1.92 MSPS), while assuming DMA packets containing a whole subframe (i.e., 1 ms of data, or 1920 samples). That is why by default it uses 4 buffers of 2000 samples. This also implies that there is a maximum time in advance that the solution supports. If you were to use shorter packets (e.g., 0.5 ms or 960 samples), then you'd need to adjust the buffering to 8 buffers of 1000 samples (and adjust the RF driver side as well).
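
A quick check of that sizing, using only the values from the paragraph above (the rounding of 1920 to 2000 and 960 to 1000 samples per buffer is presumably headroom for the packet header/metadata); sketch only:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double sample_rate_sps = 1.92e6;     /* 6 PRB cell */
    const double harq_budget_ms  = 4.0;        /* LTE ack deadline */
    const double packet_ms[]     = {1.0, 0.5}; /* DMA packet durations discussed above */

    for (int i = 0; i < 2; i++) {
        double samples_per_packet = sample_rate_sps * packet_ms[i] / 1e3;     /* 1920 / 960 */
        int    num_buffers        = (int)ceil(harq_budget_ms / packet_ms[i]); /* 4 / 8 */
        printf("%.1f ms packets -> %d buffers of %.0f+ samples\n",
               packet_ms[i], num_buffers, samples_per_packet);
    }
    return 0;
}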

Regarding the clocking, both the DAC and ADC chains ultimately use the same source (sampling) clock, which for 6 PRBs is 1.92 MHz. Yet what you see is that DMA transactions happen at the AXI clock, which has a much higher frequency (e.g., 100 MHz), and this clock is used internally in our blocks, except for those parts directly interfacing with the ADC/DAC. The reason behind this is to artificially create time gaps, which enable the insertion of the headers at the ADC side and the time alignment (e.g., management of late situations) at the DAC side.

Of course, commenting out the FSM and forcing the data to be read as soon as it is received will result in a misaligned transmission (the test results that you show above also seem to point to a situation where the samples are arriving way too late at the FPGA and, thus, are discarded).

On our end, we'll try to work on the issues and do some more testing, but we have very limited time for this; hence, all your inputs are more than welcome. I hope this has clarified a few things for you.

Regards

uptools commented Mar 2, 2023

Hi @ofontbach. Thank you very much for your explanation.

Regarding the original Pluto (using USB 2.0), we measured with the regular IIO that it cannot reach 1.92 MSPS unless we use iio_buffers larger than 5000 samples (20000 bytes). We measured a similar limitation on the Pluto+ even when connecting through its Ethernet interface.

Currently srsenb is using 1920-sample buffers when calling the rf_iio_imp interface, that is, 1 millisecond. So I suspect regular IIO/USB 2.0 will not be able to make it with that buffer size.

If we changed to 5760 samples (3 ms) in TX and RX, that would enable IIO/USB 2.0 to reach the desired rate, but it would fail to comply with the LTE-required acknowledgment of received data in 4 ms. Anyhow, it would be very useful to check whether it still works OK, because a 4 ms extension is being considered for some long-range scenarios by the 3GPP forum.

Could you give us a hint on how we can extend this 4 ms limit in both srsue and srsenb, and also extend the buffer size used in the srs-rf_iio_imp interface, so that we can test them?

We have tried, but found that the srs buffer sizes are closely related to the resource grid size and timing, so we found no easy way of extending them.

@pgreenland

@ofontbach Thanks for such a detailed reply, that really helped fill in some gaps.

My goal is to run the Pluto as a self-contained (albeit tiny) eNodeB, initially for IoT-connected devices on my desk but later potentially in our HIL rigs to save on RF plumbing and network costs.

I was attempting to take a step-by-step approach, starting on the host with networked IIO before cross-compiling. With the confidence from the AntSDR results, I'll cross-compile and see how things go running directly on the unit itself.

I found the late/underflow flagging useful with a LimeSDR connected to an underpowered desktop in the past, so I wouldn't say no to some reporting from the driver in the future if you find the time.

@uptools if you get the eNodeB working over the USB / virtual Ethernet links, please drop the buffer size / length values you found to work here, and maybe where to put them. I attempted to tweak them (in both the RF driver and the FPGA, I thought) but ended up just crashing the eNodeB, so I likely didn't quite get my numbers right.

I'll report back with my experience of running on the unit, if the battle of the cross-compile is won :-)

@xavierarteaga

Hi @uptools,

> Could you give us a hint on how we can extend this 4 ms limit in both srsue and srsenb, and also extend the buffer size used in the srs-rf_iio_imp interface, so that we can test them?

You can tweak the following macros:

#define FDD_HARQ_DELAY_DL_MS 4
#define FDD_HARQ_DELAY_UL_MS 4
#define MSG3_DELAY_MS 2 // Delay added to FDD_HARQ_DELAY_DL_MS

Unfortunately, modifying these parameters has unknown implications and is not sufficient for correct operation of srsenb and srsue.

@pgreenland

Cross-compiling the eNodeB and EPC for the Pluto went well, although I had to tweak a few buffer sizes to fit within the 512 MB of RAM.

Running directly on the Pluto, the buffer header errors are gone and the network is now visible to my COTS modem.

Unfortunately it looks like the Pluto's 7010, even with both cores active, doesn't have the performance to run the eNodeB itself.

The system shows a load average of 7, with the eNodeB's threads consuming the entirety of both cores.

Any time the UE attempts to connect, something similar to the following is seen:

RACH:  tti=881, cc=0, pci=1, preamble=32, offset=34, temp_crnti=0x98
SCHED: Could not transmit RAR within the window (RA=881, Window=[884, 894), RAR=1687
Disconnecting rnti=0x98.
RACH:  tti=941, cc=0, pci=1, preamble=41, offset=34, temp_crnti=0x99
SCHED: Could not transmit RAR within the window (RA=941, Window=[944, 954), RAR=1937
Disconnecting rnti=0x99.
RACH:  tti=1261, cc=0, pci=1, preamble=50, offset=34, temp_crnti=0x9a
SCHED: Could not transmit RAR within the window (RA=1261, Window=[1264, 1274), RAR=3257
RACH:  tti=1341, cc=0, pci=1, preamble=46, offset=34, temp_crnti=0x9b
SCHED: Could not transmit RAR within the window (RA=1341, Window=[1344, 1354), RAR=3270
SCHED: Could not transmit RAR within the window (RA=1601, Window=[1604, 1614), RAR=3283
RACH:  tti=1601, cc=0, pci=1, preamble=45, offset=34, temp_crnti=0x9c
RACH:  tti=1681, cc=0, pci=1, preamble=32, offset=34, temp_crnti=0x9d
SCHED: Could not transmit RAR within the window (RA=1681, Window=[1684, 1694), RAR=3309
RACH:  tti=1861, cc=0, pci=1, preamble=36, offset=34, temp_crnti=0x9e
SCHED: Could not transmit RAR within the window (RA=1861, Window=[1864, 1874), RAR=3347
RACH:  tti=1981, cc=0, pci=1, preamble=28, offset=34, temp_crnti=0x9f
SCHED: Could not transmit RAR within the window (RA=1981, Window=[1984, 1994), RAR=3403
RACH:  tti=2221, cc=0, pci=1, preamble=0, offset=34, temp_crnti=0xa0
SCHED: Could not transmit RAR within the window (RA=2221, Window=[2224, 2234), RAR=3487
Disconnecting rnti=0x9a.
Disconnecting rnti=0x9b.
Disconnecting rnti=0x9c.
Disconnecting rnti=0x9d.
Disconnecting rnti=0x9e.
Disconnecting rnti=0x9f.
Disconnecting rnti=0xa0.
RACH:  tti=2461, cc=0, pci=1, preamble=49, offset=34, temp_crnti=0xa1
SCHED: Could not transmit RAR within the window (RA=2461, Window=[2464, 2474), RAR=4200
RACH:  tti=2581, cc=0, pci=1, preamble=16, offset=34, temp_crnti=0xa2
SCHED: Could not transmit RAR within the window (RA=2581, Window=[2584, 2594), RAR=4531
RACH:  tti=2701, cc=0, pci=1, preamble=28, offset=34, temp_crnti=0xa3
SCHED: Could not transmit RAR within the window (RA=2701, Window=[2704, 2714), RAR=4572
There are 880/1024 buffers in shared block container. This thread contains 14 in its local cache
Disconnecting rnti=0xa1.
Disconnecting rnti=0xa2.
Disconnecting rnti=0xa3.

It looks like resolving the remote IIO bottlenecks may be the best/only route to using srsRAN with the Pluto.

@ofontbach (Collaborator)

Thanks for the input @pgreenland. Yes, we already knew that without any kind of FPGA acceleration of the PHY (e.g., even just the FFTs), unfortunately, the Pluto can't handle either the eNB or the UE as a fully embedded application.

@pgreenland

@ofontbach Thanks for the reply. Based on your earlier comment on acceleration in the FPGA, does it feel like I'm heading down a dead-end street here? I.e., would it be worth my trying to integrate the Xilinx FFT IP, or is it simply the case that there won't be enough resources in the Pluto's Zynq to host the eNodeB, based on your experience with the bigger parts?

uptools commented Mar 6, 2023

Thanks @xavierarteaga!

We are going to test these!

> Hi @uptools,
>
> > Could you give us a hint on how we can extend this 4 ms limit in both srsue and srsenb, and also extend the buffer size used in the srs-rf_iio_imp interface, so that we can test them?
>
> You can tweak the following macros:
>
> #define FDD_HARQ_DELAY_DL_MS 4
> #define FDD_HARQ_DELAY_UL_MS 4
> #define MSG3_DELAY_MS 2 // Delay added to FDD_HARQ_DELAY_DL_MS
>
> Unfortunately, modifying these parameters has unknown implications and is not sufficient for correct operation of srsenb and srsue.

@LunaWuna (Author)

Has anyone managed to play around and get a working configuration yet? If so, please share what you have done, as I still haven't managed to get one.

@pgreenland

Without wishing to hijack the thread.

I had a go at creating my own timestamping solution specifically for the Pluto. If you're set on getting the Pluto hosting an srsRAN-powered LTE network at either 6 or 15 PRBs, check out my blog post:

Private LTE with Analog ADALM-PLUTO

I've taken a slightly different approach to the srsRAN team: I've adapted the SoapySDR driver for the Pluto, so it works with the mainline srsRAN release. I've also included full builds of the Pluto's firmware to get the new FPGA image up and running easily, along with a few other tweaks required to get the streaming performance up to the level required for LTE.

Leave me a comment if you give it a go :-)

LunaWuna (Author) commented Jun 6, 2023

I'll definitely try it out soon when I next can! I've been trying to run an eNB on the Pluto with no luck for a while.

LunaWuna (Author) commented Jun 6, 2023

I can't find anywhere else to ask, so I'll ask it here.

I've done everything, but it ends up saying time-stamping enabled but no timestamp provided. I'm running Ubuntu 23.04. The firmware is definitely installed, because the device names itself as PlutoSDR with timestamping support.

@ofontbach (Collaborator)

> Without wishing to hijack the thread.
>
> I had a go at creating my own timestamping solution specifically for the Pluto. If you're set on getting the Pluto hosting an srsRAN-powered LTE network at either 6 or 15 PRBs, check out my blog post:
>
> Private LTE with Analog ADALM-PLUTO
>
> I've taken a slightly different approach to the srsRAN team: I've adapted the SoapySDR driver for the Pluto, so it works with the mainline srsRAN release. I've also included full builds of the Pluto's firmware to get the new FPGA image up and running easily, along with a few other tweaks required to get the streaming performance up to the level required for LTE.
>
> Leave me a comment if you give it a go :-)

Hi @pgreenland,

Thanks for letting us know about your custom implementation. It looks quite promising, and it's good that you came up with a solution specifically tailored for the ADALM-PLUTO that enables the use of srsRAN 4G on it. As you said yourself in your blog post, our solution is meant to be used in a wider range of Zynq-based systems and, unfortunately, seems a little too heavy for the ADALM-PLUTO. I'm sure that with a little effort it can be improved... we are just missing the time for it! So your solution is really welcome :)

Cheers!

@pgreenland

> I can't find anywhere else to ask, so I'll ask it here.
>
> I've done everything, but it ends up saying time-stamping enabled but no timestamp provided. I'm running Ubuntu 23.04. The firmware is definitely installed, because the device names itself as PlutoSDR with timestamping support.

Hi @LunaWuna,

To save confusing issues across repos, I've enabled issues on my fork of the Pluto firmware.

It sounds like you're 95% of the way there; I've got a rough idea of what's going wrong.

Let's continue the conversation there, to save confusing anyone working with srsRAN's solution.

Thanks,

Phil
