
The Future of UVM


It’s time for a frank discussion on the future of UVM. Given how UVM usage has grown and the number of teams that rely on it, I think this conversation has been a long time coming.

Is continuing to use UVM the right thing to do? Do we have hard evidence that supports our continued usage of UVM? Do we actually benefit from it or do we just think we benefit?

When it was first introduced, did we accept UVM for good reason or accept it out of pure suggestion? EDA suggested it was a natural step to take. We’ve seen the success stories. The headline “<Company X> Tapes Out SoC Using UVM” implies success. But how often do people publish failures? How often do we see articles titled “UVM Adoption Blew Our Development Schedule”? When have we seen conference papers dive into the details of how an over-engineered UVM testbench led to a 2x schedule overrun?

Or maybe it wasn’t suggestion. Despite having no clear indication UVM is better than what we had, maybe we accepted UVM because we wanted to; because it was cool. Engineers love to optimize and UVM gives us all kinds of options for doing just that. We love the idea of portability (so much so that the next new thing has the word ‘portable’ right in its name!) and UVM offers lots of portability. UVM makes it easy to generalize job postings and rank candidates, too. And let’s not forget that UVM was so, so shiny[1]. There was so much to learn, which was a big draw for engineers. And even though it’s software-y, the language and BCL packaging shielded us from the scarier bits of software theory.

But thinking back through the last 15 years and the evolution of functional verification that culminated in UVM, have we ever considered that UVM is where functional verification possibly went wrong? Should we be considering a future without UVM?

Or… hmmm… uhhh…

Meh.

Never mind.

Let’s scratch the whole time-for-a-frank-discussion-on-the-future-of-UVM thing. The evidence for and against is sketchy at best so there’s probably no point in discussing it. We are where we are so let’s keep thinking of UVM as the Universal foundation of functional verification. Let’s keep adding the features to UVM that produce the anecdotal evidence of its own success. Let’s take it beyond simulation. Let’s keep using it to fill conference programs, filter out qualified job candidates, hone our pseudo-software skills[2], sharpen the divide between design and verification, fuel the need for training and complementary tools, etc. Let’s keep doing what we’re doing! Except for one tiny difference:

Let’s make failing miserably with UVM less likely.

Just because we don’t see the failures published and celebrated doesn’t mean they don’t happen. You know they happen. They’re out there: the weekly schedule slips, the ridiculously complicated testbenches. You’ve seen them. I know you’ve seen them because you told me you’ve seen them. Many of them! And they’ll continue to happen unless we rein in future-UVM to make them less likely.

To get us started, I’d like to propose a set of rules that applies to all future-UVM development:

Rule 1: Features actually have to work. This seems like a no-brainer but it’s a rule that’s currently being broken. Features that don’t work get fixed or removed. Phase jumping… I’m looking at you… unless, of course, someone has recently fixed phase jumping.
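
For anyone who hasn’t bumped into it, phase jumping is the mechanism that lets a component yank the testbench back to (or ahead to) another run-time phase, typically to redo a reset. A minimal sketch of what that looks like; the component name and the jump-on-error decision are invented purely for illustration:

  // illustration only: a component that jumps back to the reset phase
  `include "uvm_macros.svh"
  import uvm_pkg::*;

  class jumpy_component extends uvm_component;
    `uvm_component_utils(jumpy_component)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    virtual task main_phase(uvm_phase phase);
      phase.raise_objection(this);
      // ... drive stimulus ...
      // on, say, an unrecoverable protocol error, send the schedule
      // back to the reset phase - the feature Rule 1 calls out
      phase.jump(uvm_reset_phase::get());
      phase.drop_objection(this);
    endtask
  endclass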

Rule 2: All features are recommended. If a feature is not recommended, chances are its primary purpose is to mislead unsuspecting verification engineers. Instead of recommending no one use a feature, let’s just save people the trouble and remove it. All the phases that run in parallel with run_phase… now I’m looking at you.
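
For context, the phases in question are the UVM run-time phases (reset_phase, configure_phase, main_phase, shutdown_phase and friends) that execute alongside run_phase, which means a component can implement both and end up with two concurrent threads of time-consuming behaviour. A bare-bones sketch of that pattern, component name invented for illustration:

  // illustration only: run_phase and main_phase both execute at run time
  `include "uvm_macros.svh"
  import uvm_pkg::*;

  class parallel_phases_component extends uvm_component;
    `uvm_component_utils(parallel_phases_component)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    // spans the entire run-time portion of the test
    virtual task run_phase(uvm_phase phase);
      `uvm_info("RUN", "run_phase is active", UVM_LOW)
    endtask

    // executes concurrently with run_phase while the main phase is active
    virtual task main_phase(uvm_phase phase);
      `uvm_info("MAIN", "main_phase is active, in parallel with run_phase", UVM_LOW)
    endtask
  endclass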

Rule 3: Cap the size of the code base. Face it, at several thousand lines of code and growing, size and complexity are what will eventually take this house of cards down. If continuing to prop it up is a long term objective, we’ll need to cap complexity. The easiest objective way to do that is to cap the size of the code base. If you want to add a line of code that takes UVM beyond the cap, you need to remove some other line of code first… which means you need to know what features people are actually using… which is another discussion… for another time.

Rule 4: New features come with a price. The price of new features is set in bug fixes. You need to pay for the feature – i.e. fix some existing bug(s) – before your new UVM feature is released. One bug fix for a function or task, five for a class, 15 for a package, plus 13 for every change that breaks backward compatibility.

Rule 5: All new features are regression tested. Aside from rules 1-4, this is my personal favorite. Your new feature or bug fix has to be delivered with tests that verify it works. The tests go in a regression suite that’s run with every update to the code base.
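
To make that concrete, and as an illustration only (the file name, the ‘knob’ field and the choice of uvm_config_db as the feature under test are all my own assumptions), a regression check can be as small as a self-checking module that exercises the fixed code path and shouts when it breaks:

  // illustration only: a tiny self-checking regression test
  `include "uvm_macros.svh"
  import uvm_pkg::*;

  module config_db_regression;
    initial begin
      int expected = 7;
      int actual;

      // exercise the code path the (hypothetical) fix touched
      uvm_config_db#(int)::set(null, "*", "knob", expected);

      if (!uvm_config_db#(int)::get(null, "", "knob", actual) || actual != expected)
        `uvm_fatal("REGR", "config_db round trip failed")
      else
        `uvm_info("REGR", "config_db round trip passed", UVM_NONE)
    end
  endmodule

Run something like that with every commit and the feature has to keep working, whether or not anyone remembers why it was added.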

That’s it. A concise set of rules that improves future-UVM for all of us. Who knows where those rules will take future-UVM over the coming decade, but I do know it’ll make life easier for the teams still catching up on the last decade. And kudos to the people who have already started down this path with the frameworks and papers that are meant to make UVM easier. Just imagine what these people could do if it was easy to use on its own!

Side note… I started writing this article to go in a completely different direction. Funny how fast and far things can go off the rails once you really get moving.

 

[1] Shiny and new was the promise of functional verification in the early 2000s and it’s pulled through on that promise big-time. Admittedly, the shine of RVM and VMM is what pulled me into verification in the first place and what got me through to UVM.

[2] I’m no software developer but I appreciate it when UVM makes me think I am.

