Pine64 introduces Star64, a single-board computer with a RISC-V chip for the first time – Computer – News


RISC-V itself is not a design, but an ISA (Instruction Set Architecture).

ARM, for example, provides a fairly complete package for building a processor. The ISA is fixed for a given generation of ARM cores, but different core designs can also be licensed that achieve different performance levels. The well-known design houses build peripherals and memory buses around such a core, and then the SoC is ready. Some manufacturers (Apple with the M1, but others too) tweak the cores even further at their own discretion.

With RISC-V, this is all much more relaxed. The instruction set consists of a small base plus several separate extensions, and you pick what you need. The minimum is the base integer ISA, which lets you work with 32-bit integers. The encoding even leaves opcode space free for you to add your own instructions. The implementation of the processor (the pipeline and so on) is not prescribed either; implementations come from industry and the community.
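To make that reserved opcode space concrete: every 32-bit RISC-V instruction carries a 7-bit major opcode in bits [6:0], and the spec permanently reserves four major opcodes (custom-0 through custom-3) that the standard promises never to claim, so vendors can put their own instructions there. A minimal Python sketch (an illustration, not a real decoder) of checking whether an instruction word sits in that custom space:

```python
# The four major opcodes the RISC-V spec reserves for vendor-custom
# instructions (bits [6:0] of the instruction word).
CUSTOM_OPCODES = {0b0001011: "custom-0", 0b0101011: "custom-1",
                  0b1011011: "custom-2", 0b1111011: "custom-3"}

def major_opcode(insn: int) -> int:
    """Extract the 7-bit major opcode from a 32-bit instruction word."""
    return insn & 0x7F

def is_custom(insn: int) -> bool:
    """True if the instruction uses vendor-reserved opcode space."""
    return major_opcode(insn) in CUSTOM_OPCODES

# 'add x3, x1, x2' encodes to 0x002081B3; its opcode 0b0110011 is the
# standard OP (register-register) major opcode, not a custom one.
assert not is_custom(0x002081B3)

# A hypothetical vendor instruction sitting in the custom-0 slot:
assert is_custom(0x0000000B)
```

A core that decodes custom-0 instructions it doesn't implement would normally raise an illegal-instruction trap; a vendor core executes them, which is exactly why a stock compiler can't emit them.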

This certainly has advantages in certain aspects. If you build your own ASIC or FPGA design, you can choose to create or license a high-performance core design with superscalar execution (processing multiple instructions at once), speculative execution, multiple ALUs, and so on. You then move towards desktop-PC levels of performance.

But you can also optimize for size. In the open-source community there is a so-called "bit-serial" implementation called SERV. This CPU is extremely small because it processes its 32-bit datapath one bit per clock cycle. An instruction that adds two numbers together therefore takes 32 cycles. Regular CPUs do that in 1 cycle, and CPUs with multiple ALUs can achieve even higher throughput.
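The bit-serial idea is easy to sketch in software (an illustration of the principle, not SERV's actual implementation): the ALU shrinks to a 1-bit full adder plus a single carry flip-flop, and the loop below stands in for the 32 clock cycles:

```python
def bit_serial_add(a: int, b: int, width: int = 32) -> int:
    """Add two numbers the way a bit-serial core does: one result bit
    per 'clock cycle', keeping only a 1-bit carry between cycles."""
    result, carry = 0, 0
    for cycle in range(width):            # one loop pass = one clock tick
        abit = (a >> cycle) & 1           # shift operands in, LSB first
        bbit = (b >> cycle) & 1
        s = abit ^ bbit ^ carry           # 1-bit full adder: sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))   # carry out
        result |= s << cycle              # shift the result bit out
    return result & ((1 << width) - 1)    # wrap like a 32-bit register

assert bit_serial_add(123456, 654321) == 777777
assert bit_serial_add(0xFFFFFFFF, 1) == 0   # 32-bit overflow wraps to 0
```

The hardware cost is one full adder and a handful of flip-flops regardless of word width; the price is that every operation costs `width` cycles instead of one.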

Why would you want that? Because it's possible. The CPU is literally a few hundred flip-flops, so some 6000 SERV cores fit on one large FPGA. The performance is dreadful, because a single core needs 32 clock ticks per instruction.
Still, there is a legitimate use for it. Suppose you make an ASIC that needs a small processor just to put some bits in the right place at startup, and does nothing else. You can do that with a few hundred flip-flops, and you can even update the firmware later.

A few paragraphs ago I said "in certain aspects", because I'm not yet convinced this is 'the' approach for software developers. For hardware developers, RISC-V is fantastic precisely because it hasn't been molded into a fixed shape yet. However, several RISC-V microcontrollers I've seen so far ship with their own GCC builds, needed precisely because they add their own instructions to the processor. It's a classic open-source pattern: someone wants to improve a piece of software, doesn't like the collaboration or the direction of the project, and makes their own fork, so you end up with two software packages that do the same thing. Is that desirable for a compiler that should target one and the same architecture? Or do we have to live with the fact that certain 'proprietary' instruction-set extensions can only be used through the manufacturer's (possibly poorly maintained and badly outdated) compiler fork?

I also dislike the fact that it is not clear what performance level you can expect from many chips. "A RISC-V core" doesn't tell me how fast it can be, as the superscalar-versus-bit-serial comparison above shows.

[Comment edited by Hans1990 on 30 July 2022 12:48]