
Add L1 Instruction Cache #152

Open
wants to merge 10 commits into base: master

Conversation

danbone
Contributor

@danbone danbone commented Feb 8, 2024

Adding an L1 Instruction Cache for fetch as noted in #143

  • Updated the fetch unit so that it requests a block of instructions per cycle from the instruction cache. The block is formed from the instructions' program counters.
  • Tidied up some coding style within the L2Cache/MSS.
  • Added tests for the new instruction cache unit.
  • Changed the MSS credit interfaces to be more like those used within the core.

klingaard
klingaard previously approved these changes Feb 12, 2024
Collaborator

@klingaard klingaard left a comment


LGTM. Appreciate your changing code to match the Code Style guide.

@danbone
Contributor Author

danbone commented Feb 16, 2024

I haven't managed to make much progress this week. Rather than leave the PR to rot, I think I'd merge this, update #143 with the enhancements I'd hoped to do, and open a new PR. That probably makes code review easier anyway.

@danbone danbone marked this pull request as ready for review February 16, 2024 15:58
@arupc-vmicro arupc-vmicro self-requested a review February 26, 2024 03:41
@klingaard klingaard dismissed their stale review February 26, 2024 17:15

More changes were added. Need to look them over a bit.

Comment on lines +127 to +135
std::deque<InstPtr> ibuf_;

// Size of trace buffer (must be sized >= L1ICache bandwidth / 2B)
const uint32_t ibuf_capacity_;

// Fetch buffer: holds a queue of instructions that are either
// waiting for an ICache hit response or are ready to be
// sent to decode
std::deque<InstPtr> fetch_buffer_;
Collaborator


Consider using a sparta::Queue or sparta::Buffer here to get free pipeline collection, capacity checks, and utilization stats.
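To illustrate the suggestion, here is a minimal sketch of the kind of behavior `sparta::Buffer`/`sparta::Queue` provide automatically. `BoundedBuffer` is a hypothetical stand-in, not the Sparta API: it shows the hard capacity check on insert and occupancy tracking for utilization stats that a plain `std::deque` lacks (Sparta additionally wires these into pipeline collection).

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <stdexcept>

// Hypothetical stand-in for sparta::Buffer: enforces a capacity on
// insert and tracks peak occupancy, which a bare std::deque does not.
template <typename T>
class BoundedBuffer
{
public:
    explicit BoundedBuffer(uint32_t capacity) : capacity_(capacity) {}

    void push(const T & item)
    {
        if (buf_.size() >= capacity_) {
            throw std::runtime_error("buffer overflow: capacity exceeded");
        }
        buf_.push_back(item);
        peak_ = std::max<std::size_t>(peak_, buf_.size());
    }

    T pop()
    {
        T front = buf_.front();
        buf_.pop_front();
        return front;
    }

    std::size_t size() const          { return buf_.size(); }
    std::size_t peakOccupancy() const { return peak_; }

private:
    std::deque<T> buf_;
    const uint32_t capacity_;
    std::size_t peak_ = 0;
};
```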

{
const auto & inst_ptr = inst_generator_->getNextInst(my_clk_);
if (SPARTA_EXPECT_TRUE(nullptr != inst_ptr)) {
ibuf_.emplace_back(inst_ptr);
Collaborator


Wonder if you can append to the ibuf_ as blocks instead of individual instructions...
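One possible shape for block-wise appending, sketched below. The `Inst` struct and `takeNextBlock` helper are hypothetical simplifications of the model's `InstPtr`/trace machinery: drain every consecutive trace instruction whose PC falls in the same cache block, then append them to `ibuf_` as one group rather than one instruction per call.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical simplification of the model's instruction type.
struct Inst { uint64_t pc; };

// Pop all consecutive instructions from the front of 'trace' that share
// the same cache block address (pc >> block_shift) and return them as
// one block-sized group.
std::vector<Inst> takeNextBlock(std::deque<Inst> & trace, uint32_t block_shift)
{
    std::vector<Inst> block;
    if (trace.empty()) { return block; }
    const uint64_t block_addr = trace.front().pc >> block_shift;
    while (!trace.empty() && (trace.front().pc >> block_shift) == block_addr) {
        block.push_back(trace.front());
        trace.pop_front();
    }
    return block;
}
```

With a 64-byte block (`block_shift` = 6), PCs 0x1000 and 0x1002 land in one group while 0x1040 starts the next.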

@arupc-vmicro
Collaborator

@danbone are you planning to add documentation describing the uarch of the modeled L1 ICache and the changes to the fetch unit as part of this PR?

"cpu.core*.biu.ports.out_biu_ack",
"cpu.core*.l2cache.ports.in_biu_l2cache_ack"
"cpu.core*.biu.ports.out_biu_credits",
"cpu.core*.l2cache.ports.in_biu_l2cache_credits"
Collaborator


In the planned documentation, can you please include a list of the ICache's input and output ports and what they represent? It would help in understanding this part of the code changes.

// Poll on dcache_l2cache_credits_ > 0 which means
// DCache is blocking for now. Busy is set on miss, until the miss is
// resolved by the L2.
// For NB behaviour: Poll on dcache_l2cache_credits_ > 0 which means
Collaborator


If you would like to use NB as an acronym, can you please define it somewhere first.

If this comment relates to a future change rather than the current behavior, please add a TODO: before the comment to make it explicit.

num_insts_to_fetch_(p->num_to_fetch),
skip_nonuser_mode_(p->skip_nonuser_mode),
my_clk_(getClock()),
icache_block_shift_(sparta::utils::floor_log2(p->block_width.getValue())),
ibuf_capacity_(std::ceil(p->block_width / 2)), // buffer up instructions read from trace
Collaborator


I don't understand why we need block_width / 2 as ibuf capacity. Can you please add a comment as an explanation?
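A plausible reading (an assumption, not confirmed in this thread): the smallest legal instruction encoding is 2 bytes, e.g. RISC-V compressed instructions, so a block of `block_width` bytes holds at most `block_width / 2` instructions. The sketch below works through the arithmetic; `floor_log2` is a stand-in with the assumed semantics of `sparta::utils::floor_log2`, and `ibuf_capacity_for` is a hypothetical helper mirroring the constructor's expression.

```cpp
#include <cmath>
#include <cstdint>

// Stand-in with the assumed semantics of sparta::utils::floor_log2:
// index of the highest set bit.
inline uint32_t floor_log2(uint64_t v)
{
    uint32_t r = 0;
    while (v >>= 1) { ++r; }
    return r;
}

// For a 64-byte block, shifting a PC right by floor_log2(64) = 6 yields
// the block address. Assuming a 2-byte minimum instruction encoding,
// one block holds at most block_width / 2 instructions, which would
// explain sizing ibuf_capacity_ as ceil(block_width / 2).
inline uint32_t ibuf_capacity_for(uint32_t block_width_bytes)
{
    return static_cast<uint32_t>(std::ceil(block_width_bytes / 2.0));
}
```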

// Capture when the simulation is stopped prematurely by the ROB, e.g. hitting the retire limit
node->getParent()->registerForNotification<bool, Fetch, &Fetch::onROBTerminate_>(
this, "rob_stopped_notif_channel", false /* ROB may not be constructed yet */);

Collaborator


Can you please clarify why this is needed?

void Fetch::fetchInstruction_()
{
const uint32_t upper = std::min(credits_inst_queue_, num_insts_to_fetch_);
// Prefill the ibuf with some instructions read from the tracefile
// keeping enough capacity to group them into cache block accesses.
Collaborator


Is there a specific reason why we need to prefill instructions from the trace? Can we not read the trace after we get the response back from the ICache? With a decoupled frontend, isn't that what we would expect to do?
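The `fetchInstruction_` excerpt above caps the per-cycle send at `std::min(credits_inst_queue_, num_insts_to_fetch_)`. The sketch below is a hypothetical simplification of that credit-bounded send, not the model's actual code: each cycle forwards at most `min(credits, fetch width)` instructions and consumes one credit per instruction sent.

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>

// Hypothetical simplification of the credit-bounded send in
// fetchInstruction_: the number of instructions forwarded to decode is
// capped by both the downstream credit count and the fetch width.
uint32_t sendToDecode(std::deque<uint64_t> & fetch_buffer,
                      uint32_t & credits,
                      uint32_t num_insts_to_fetch)
{
    const uint32_t upper = std::min(credits, num_insts_to_fetch);
    uint32_t sent = 0;
    while (sent < upper && !fetch_buffer.empty()) {
        fetch_buffer.pop_front();  // "forward" the instruction downstream
        ++sent;
    }
    credits -= sent;               // one credit consumed per instruction sent
    return sent;
}
```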

@arupc-vmicro arupc-vmicro self-requested a review March 4, 2024 08:53
Comment on lines +67 to +69
void getRequestFromFetch_(const MemoryAccessInfoPtr &);
void getCreditsFromL2Cache_(const uint32_t &);
void getRespFromL2Cache_(const MemoryAccessInfoPtr &);


Why callbacks? Why are the in-ports below not enough?

@jeffnye-gh
Collaborator

What is the status of this PR? I see it now has conflicts; if those were fixed, would this be ready to go?

I need it for BPU evals.
