
General computer requirements

Started by SoraK05, June 12, 2014, 01:12:30 AM


SoraK05

Hello :)
As someone who is somewhat interested in computers and their makeup, I have come to think of a computer in general as an automated calculator.
With a hardwired board, feeding it an input of signals (numbers) determines what to do in order to perform math (arithmetic/logic), with hardware acting as memory buffers, comparators and so forth.

An image file like a BMP is a block of number data which can be read and streamed, for its numbers (of colouring), directly to the monitor's grid; the computer components accept these instructions and relay the numbers to the monitor, for example. The same goes for audio, like a WAV, where its number data is sent to a speaker with some instructions.


In general, I am looking to determine the exact minimum number of opcodes required for a PC.
I.e., general arithmetic, comparing, jumping, interrupts (for delaying and, in a sense, skipping), directing signals to a specific device and so forth.

What are the minimum opcodes and registers required to have a functional computer with its RAM/NAND (for some OS data) that can receive/send (processed number) data to components?
I.e., if these were the only (minimal) opcodes and registers available (plus some RAM/NAND), they would allow for a fully functioning PC that could work with attached devices (provided the device has its own chip, for example) and even emulate other systems <where clock speed is generally not a concern>.
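As a concrete sketch of what such a minimal machine might look like, here is a toy accumulator machine in Python with nine illustrative opcodes. The names, encoding and I/O model are my own invention, not any real ISA; it only illustrates the opcode families the question lists (arithmetic, comparing, jumping, device I/O, halting).

```python
# A toy accumulator machine with nine illustrative opcodes. The names,
# encoding and I/O model are invented for illustration, not any real ISA.
def run(program, memory, inputs=None):
    """Execute (opcode, operand) pairs against a flat memory."""
    acc = 0                        # single accumulator register
    pc = 0                         # program counter
    inputs = list(inputs or [])
    outputs = []
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":     acc = memory[arg]              # memory -> register
        elif op == "STORE":  memory[arg] = acc              # register -> memory
        elif op == "ADD":    acc += memory[arg]             # arithmetic
        elif op == "SUB":    acc -= memory[arg]
        elif op == "JMP":    pc = arg                       # jumping
        elif op == "JZ":     pc = arg if acc == 0 else pc   # comparing
        elif op == "IN":     acc = inputs.pop(0)            # device -> register
        elif op == "OUT":    outputs.append(acc)            # register -> device
        elif op == "HALT":   break
    return outputs

# Read two numbers from a "device" and output their sum:
prog = [("IN", None), ("STORE", 0), ("IN", None),
        ("ADD", 0), ("OUT", None), ("HALT", None)]
print(run(prog, [0] * 16, inputs=[3, 4]))  # [7]
```

Real CPUs add many more opcodes for speed and convenience, but nothing in the question's list strictly requires them.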

Zen

SORAK 05,   
Interesting.
As MASM assembly language programmers, we are, by definition, Windows programmers,...and, based on my experience, Windows operating systems have evolved into incredibly complex executable environments.
I think the current configuration is the very antithesis of simple,...as it attempts to be all things to all users. 
...The average modern computer user has only a vague recognition of the total functionality that is provided as software,...and, consequently, uses only a small fraction of available software components,...
I don't know that your question is relevant to all that,...but, there clearly is a minimal functional set of integrated hardware/software capabilities, that would perform as a "fully functioning PC",...

But,...why ???

SoraK05

I have thought of the notion of a 'step-by-step asm efficient' type OS on hardware to suit.
I have also thought of an accompanying compression scheme as part of the OS (as mentioned in another thread).

The idea is to have the minimal steps required for a functional OS, and a low+high level language custom written to suit minimal-step code.
It is also, in general, to have minimal opcodes.

Windows is currently near 20GB, which is extremely excessive even including device support, and it requires around 1.5GB of RAM.
In general, an OS can have very little code for supporting RAM/NAND and for receiving/sending data to specific devices in their own appropriate step-by-step manner.

I see no need for something like a GPU when a CPU can calculate the graphics numbers (and, with a decent compression scheme, all pixel info can be lossless), and the same goes for sound.

An antivirus could be embedded in the OS as a 'consensual writing' type function with some automatic privileges where applicable.

An OS with scheduling that allows concurrent cycles for attached devices to send/receive, plus custom scheduling distribution and a general executable environment, should be very small and not use much memory.
It should also be able to manually disable devices (such as temporarily disabling video and its number processing on request, if something is encoding, for example), offer the option of essentially disabling the OS so a specific executable has full hardware control until it quits (like how DOS has full control), and more.


Modern OSes and modern programming are very bloated :)
I figure hardware with minimal opcodes and a decent step-by-step ground-up OS and language, where the opcodes are embedded into the hardware rather than on a separate chip (which is fine, since the system should be able to emulate other devices), and with shorter traces etc. on a relatively smaller, insulated board, may clock at higher frequencies to compensate. It can also use broader, more universal programming in executables that may be longer, accompanied by a good step-by-step OS compression tool.

One main gaming engine could be written to do the main calculations for an environment with support for lossless images, with other minor programming to make it playable. This includes 3D and any geometry for calculating the range of an image to display, as well as any effects to alter colour in an area.
A powerful OS compression tool should allow lossless image data to be extracted and accessed quickly, for faster rendering of the final output.


The clock can be variable, adjustable, and suited to the specific requirements of executables, which can have their expected cycles estimated during compilation (or by a scan of their code structure), with specific halts to maintain a given timing expectation.
This can reduce clock expectations for general software to much lower numbers and also allow a lower voltage.
This same device can be attached to a battery and have in its i/o port a video screen which is portable.


I figure the requirement would be the main motherboard with an integrated opcode layout, RAM and NAND, and a main I/O port where each input has its own chip so the OS can recognize its device slot, plus any other specific code required for sending/receiving info - the monitor will have its own data in its port to determine what to do with any video numbers sent, and the same for speakers etc.


The point is that a step-by-step type OS and language, with tools and hardware to suit, should perform much more optimally with generally lower requirements.
With its smaller integrated design and insulation, plus a fan etc., it should also be able to overclock enough to support emulation of other architectures in step-by-step programming - I have seen emulators where, written in asm, performance is much higher than that of another compiler whose bloated code emulates the same hardware.


As it is, even with the excessive memory and transistors etc. in modern computers, these computers cannot realistically be expected to output 4K at 60fps in a 3D game, for example, in the next few years.
In general, modern software could already get by with lower clock/voltage requirements, including battery for something portable.


For compression, 7z for example is C++ and doesn't compress very effectively (in the thread I showed that a file of mostly repetition can go down to ~50B, while 7z produces 1KB, RAR 2KB and PNG 5KB). I figure that using minimal step-by-step asm, perhaps with a time/space tweak, reducing the file further in decent time is doable for the OS tool.
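To illustrate why a file of mostly repetition can collapse to a handful of bytes, here is a minimal run-length encoder. This is a sketch only, not the scheme from the other thread: each run becomes a (count, byte) pair, with runs capped at 255.

```python
# A minimal run-length encoder: each run becomes a (count, byte) pair,
# with runs capped at 255. A sketch only, to demonstrate why pure
# repetition compresses so well -- not the scheme from the other thread.
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])   # (length, value) pair
        i += run
    return bytes(out)

blob = b"A" * 100_000            # 100,000 bytes of pure repetition
packed = rle_encode(blob)
print(len(blob), len(packed))    # 100000 786
```

General-purpose compressors like 7z carry container headers and dictionary state, which is part of why their output on trivially repetitive input is larger than a special-cased encoding.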


I am interested in the notion of a minimal step-by-step OS and language, with software to suit, using minimal opcodes on suitable hardware.
One can have much more control of hardware and devices, such as specific control to read data cards as they are without any OS layers, down to raw levels beyond general Linux, for example.
Such control can allow programming of specific devices suitable for a computer, seamlessly and with step-by-step accuracy.


I figure a computer is mainly math: arithmetic and logic, with other instructions to compare/jump/load/store/halt etc.
Where the opcodes are the fewest, with an OS compression tool to complement them, the integrated layout on the board can be effective at sustaining a decent clock.
I figure, as it is math, it would mainly be ADD/SUB and general data altering, such as specific commands to alter/manipulate bits in a range like 8 bits (flipping a specific offset, shifting bits etc.). The rest is logic, movement and pausing.

Zen

SORAK 05, 
You know,...it took me a while just to read the above post,...not sure I really understand all the implications,...
I had a notion a while back,...that it might make a lot of sense to have numerous subset Operating Systems that could be loaded on demand,...
For instance,...I think it definitely would be a good idea (security-wise) to have a unique operating system just for browsing the Internet,...since all of the malware comes from that vector.

Have you ever had a look at the Microsoft Windows Driver Development Kit (DDK) ???
...Or, say,...OSR Online, Everything Windows Driver Development ???

hutch--

There is a simple if tedious way to answer the original question, download the Intel manuals for a recent x86 processor and start reading. You can easily get the count of available opcodes and then all you have to do is apply the range of opcodes to the number of tasks you need to perform and you will have answered your own question.

jj2007

Excellent idea, Hutch :t

Afterwards, OP could give us a succinct (<1 page) synthesis of his findings.

dedndave

i recall, some years back, they were talking about RISC CPUs with something like 16 instructions

maybe it was 32
anyways, any CPU that doesn't have MUL and DIV microcoded is a step backwards, in my opinion

FORTRANS

Quote from: SoraK05 on June 12, 2014, 01:12:30 AM
In general, I am looking to determine the exact minimum number of opcodes required for a pc.

Hello,

   Well, for a "modern" PC, you want a fairly complex CPU.  In
theory a computer could have as few as one or two opcodes/
instructions to work with and no CPU registers by working
directly in general memory.  Look up Turing Machines and Busy
Beaver Programs.

   _Briefly_ looking at "An Introduction to Microcomputers",
Volume 3, "Some real world products", June 1997 revision,
gives some idea of 4-, 8-, and 16-bit CPU's of that era.  Erm,
maybe 60 to over 200 opcodes as a quick glance.  The
TMS9900 has the fewest CPU registers with only a program
counter.  Though it implements its 16 "registers" in memory.
The 8085 had ten registers.  The F8 had 64 scratchpad registers
in addition to its other working registers.

Regards,

Steve N.

Tedd

All operations CAN be done with a single instruction and no registers (operations work directly on memory, or a stack). And since there is only a single instruction, all instructions are the same one with different parameters, and so there are effectively no instructions and only parameters.

But it's clearly neither efficient nor sensible.
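This is the classic one-instruction set computer (OISC); the usual example is SUBLEQ ("subtract and branch if less than or equal to zero"). A minimal simulator, with an illustrative three-instruction program that adds two numbers:

```python
# SUBLEQ: the single instruction "a b c" means mem[b] -= mem[a];
# if the result is <= 0, jump to c. A negative jump target halts.
# Program and data share one flat memory, so there are no registers.
def subleq(mem, pc=0, max_steps=10_000):
    steps = 0
    while pc >= 0 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# Three instructions that compute B = A + B via a scratch cell Z:
# data lives at addresses 9 (A=3), 10 (B=4), 11 (Z=0).
mem = [9, 11, 3,   11, 10, 6,   11, 11, -1,   3, 4, 0]
subleq(mem)
print(mem[10])  # 7
```

Every "instruction" really is just three parameters to the same operation, which is Tedd's point: the instruction count collapses to one, at the cost of long, awkward programs.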
Potato2

SoraK05

I figure there are these general elements:

Write a bit (0 or 1) - This is to force writing a 0 or a 1 to a destination
Inverse a bit (FLIP IT or NOT) - This is used for manipulating existing bits
Determination - This is to identify a specific offset/stream location
Movement - This is for having a flow of receiving/sending data
Basic comparing - This is used as a means to decide whether or not to perform an operation
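As an illustration of how far those primitives can go, here is an 8-bit adder built only from "write a bit", "determination", "movement" and "basic comparing". The function names and the decomposition are my own illustration, not an established instruction set:

```python
# Building an 8-bit ripple-carry adder from the listed primitives.
# The names and decomposition are illustrative only.
def write_bit(bits, i, v):
    bits[i] = v                      # "Write a bit": force a 0 or 1

def equal(a, b):
    return a == b                    # "Basic comparing"

def add_bytes(x, y):
    """Add two 8-bit values using only bit writes and comparisons."""
    xb = [(x >> i) & 1 for i in range(8)]   # "Determination": pick offsets
    yb = [(y >> i) & 1 for i in range(8)]
    out = [0] * 8
    carry = 0
    for i in range(8):                      # "Movement": stream over the bits
        s = 0 if equal(xb[i], yb[i]) else 1     # XOR via comparing
        total = 0 if equal(s, carry) else 1     # XOR with the carry
        carry = 1 if (xb[i] and yb[i]) or (s and carry) else 0
        write_bit(out, i, total)                # result wraps at 8 bits
    return sum(b << i for i, b in enumerate(out))

print(add_bytes(100, 55))  # 155
```

Note the final carry out of bit 7 is dropped, so the sum wraps modulo 256, the same way an 8-bit register would.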


To fill 256 opcodes, which is a general standard (8-bit clusters of data), the above can be done with batching to pad, having any following operand act as a multiplier :)
While this wouldn't necessarily allow much consistency with a clock, for example, the clock can be adjusted to be more proportional to the instruction. The design can also be integrated rather than on its own chip, generating less heat from the encapsulating metal and allowing for more performance / a higher clock. The above should generally cater for computer possibilities, with optimized steps and batch capability in an instruction.


All math and general functions can be preprogrammed in an instruction database and used by a program to calculate instructions on the fly, beyond preprogrammed and compiled versions of the above, where a real-time calculation is required.


All basic math can be preprogrammed and read in an intelligent OS and filesystem which can quickly access individual bits (to write/invert), allow multiple-offset recognition for locations on storage that are 'taken', and let data live in multiple places.
This allows quick erasing of specific areas by marking those areas of the storage as not part of the file and as 'free', as well as the opposite: writing a string into a 'free' location and associating it so it is recognized as part of a location in a file.

Defragmenting can be done from time to time to reshuffle file data so that, once done, every file is effectively in a single stream and rewritten that way, with the main file entry pointing to one main offset rather than multiple locations. This reduces the steps and storage needed to read the file's stream, while also keeping all used data on one side and the free data on the other (all default 0 bits, e.g.)


It can allow quick shifting of bits (multiplying/dividing by powers of 2) by appending 0 bits (or 1s) or cutting bits off the file's identified stream and locations (also acknowledging the first 1 bit and fitting it into 8 bits if it is treated as a number rather than a stream).

Odd multiplication and division, as well as adding/subtracting, can make use of the above plus inverting appropriately.
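The shift-and-add idea above is exactly how multiplication can be decomposed when MUL is not a primitive. A sketch for non-negative integers:

```python
# Multiplication built only from shifts and adds -- a sketch of why MUL
# need not be a primitive opcode (though a microcoded MUL is far faster).
# Valid for non-negative integers only.
def shift_add_mul(a: int, b: int) -> int:
    product = 0
    while b:
        if b & 1:            # low bit of b set: add the shifted multiplicand
            product += a
        a <<= 1              # a = a * 2 (append a 0 bit)
        b >>= 1              # b = b // 2 (drop the low bit)
    return product

print(shift_add_mul(13, 11))  # 143
```

Division can be decomposed the same way with shifts and subtracts, which is why dedndave's point stands: these operations are always expressible, but doing them in microcode instead of a software loop is what makes them practical.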

dedndave

here's something you might find interesting
it's essentially a 1-bit microprocessor - motorola called it an "industrial control unit"
but, it was mostly sold to hobbyists that wanted to fiddle   :biggrin:

it has 16 instructions

https://www.brouhaha.com/~eric/retrocomputing/motorola/mc14500b/mc14500brev3.pdf