
Thread: von Neumann Architecture

  1. #1
    Join Date
    Jun 2012
    Posts
    122

    von Neumann Architecture

    Why is it useful that the von Neumann Architecture treats programs the same as data? Any dangers to this?

    As for the danger, I'm thinking we could write a program that modifies itself.

  2. #2
    Join Date
    Jan 2012
    Posts
    1,104


    I think the goal was to store the instructions in memory too. Before the von Neumann architecture, instructions weren't stored anywhere; they were implemented in wiring. Storing them in the same memory as the data is simply easier to handle, whereas other architectures, like the Harvard architecture, use two separate memories.
    Writing self-modifying code is sometimes useful, but yes, it's dangerous.

  3. #3
    Join Date
    Jan 2011
    Location
    Denver, CO
    Posts
    1,351


    Just look at Common Lisp:

    http://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
    Lisp syntax is barely abstract; Lisp programs are constructed as trees of S-expressions, which are equivalent to the abstract syntax trees that compilers of other languages create internally. As a homoiconic language, Lisp also makes no distinction between code and data; Lisp programs are themselves just Lisp data structures. Lisp programs can thus generate additional Lisp programs themselves via macros. As a result, the programmer can construct complex domain specific languages with relative ease.

  4. #4
    Join Date
    Dec 2006
    Location
    Banville
    Posts
    3,914


    The other option would be the Harvard architecture. This is commonly seen in microcontrollers, where the program resides in on-chip flash and is read directly from the flash and executed (no intermediate copy to RAM).

    Even though this thread is long dead, I will explain why they are both still useful. To do so, we're going to need to talk more about embedded systems, because modern computers aren't a great example. This is because modern computers are actually a combination of both types, taking the best from each world: programs are copied and loaded the same way as data, but then held in a separate memory block for performance reasons (instruction pipelining, prefetching).

    On microcontrollers, however, you need a compact way to store the program relatively permanently. Older chips used ROM, then EPROM, then EEPROM, and now flash. The choice of Harvard architecture here is not explicit; it just falls out of implementing program storage in a cost-effective manner. In earlier chips the ROM/EPROM/EEPROM was not rewritable by the chip itself, but modern microcontrollers can generally modify their own nonvolatile memory (and new FRAM technology even allows chips to use program storage as RAM). This technique ends up being pretty common for implementing lookup tables and saving configuration settings. There is a strength, however, in that the program memory's word size need not be a multiple of eight bits. This allows the core of the processor to be optimized for certain tasks and reduces the size of the compiled programs. PIC microcontrollers, by Microchip, have a flash word size of 12 bits (14 on some parts) - that's why their datasheets list flash sizes like 1.75 kB or something equally strange (they also only have one working register, which tends to make you want to kill yourself).

    As a more complex and somewhat opposite example, high-throughput signal processing chips usually have special instructions and multiple data buses (one RAM, but you can read or write 3-4 locations at once) to be able to operate on lots of data at a time. The multiple-bus structure also allows them to have a unique instruction layout: one instruction might encode an add, a multiply, and a subtract on one or more registers, all at the same time.

    And what do they all have in common? No one really started out by saying "we need to use Harvard architecture!" It just kind of happened. Von Neumann architecture computers are generally more intuitive (at least now), but you don't strictly need self-modification for any specific reason: any program relying on rewriting itself could be rewritten to interpret data determining its next action (in effect emulating a von Neumann machine).
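
    That last point can be made concrete: instead of rewriting its own instructions, a program can interpret a data table that determines its next action. The opcode names below are made up for illustration:

```c
/* A tiny table-driven interpreter: the "program" is plain data, so
   changing the table changes behavior -- no code modification needed. */
#include <stdio.h>

enum op { OP_ADD, OP_MUL, OP_END };

int main(void) {
    /* opcode/operand pairs, terminated by OP_END */
    int table[] = { OP_ADD, 5, OP_MUL, 3, OP_ADD, 1, OP_END };
    int acc = 0;
    for (int pc = 0; table[pc] != OP_END; pc += 2) {
        switch (table[pc]) {
        case OP_ADD: acc += table[pc + 1]; break;
        case OP_MUL: acc *= table[pc + 1]; break;
        }
    }
    printf("%d\n", acc);  /* (0 + 5) * 3 + 1 */
    return 0;
}
```
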
    Last edited by R0b0t1; 03-10-2013 at 07:21 AM.
    The jealous temper of mankind, ever more disposed to censure than
    to praise the work of others, has constantly made the pursuit of new
    methods and systems no less perilous than the search after unknown
    lands and seas.
