Production test
5.22 Once a design has been verified, it can
be manufactured, but this does not mean that every chip will be
perfect. Notwithstanding very high quality control, defects still
occur in the manufacturing process. The manufacturing yield is
the proportion of manufactured chips that function correctly.
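How yield falls as dies get larger or processes get dirtier can be seen from a simple first-order model. The Python sketch below uses the widely quoted Poisson approximation, in which yield is exp(-A * D) for die area A and defect density D; the figures are hypothetical and serve only to illustrate the arithmetic.

    import math

    def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
        """First-order Poisson yield model: Y = exp(-A * D)."""
        return math.exp(-die_area_cm2 * defects_per_cm2)

    # Hypothetical figures: a 1 cm^2 die on a line with 0.5 defects per cm^2.
    print(f"{poisson_yield(1.0, 0.5):.1%}")   # about 60.7% of dies work

Doubling the die area in this model cuts the yield to exp(-1.0), roughly 36.8 per cent, which is why large dies are disproportionately expensive.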
5.23 Defective chips have no value, and it is
economically vital to identify and discard failing parts as early
in the manufacturing process as possible, to minimise the investment
that is made in them. This is the purpose of production test.
Before the chip is packaged (a step that adds cost), the bare die is tested.
This test must be able to identify any defect that renders the
chip dysfunctional, and it must therefore be able to check, for
example: that every transistor on the chip is operating correctly;
that every required connection between components functions correctly;
and that there is no short-circuit giving an unwanted interconnection.
Today's complex integrated circuits mean that developing such
a test is a formidable challenge for the test engineer.
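One widely used abstraction that makes this challenge tractable is the "stuck-at" fault model, in which a defect is assumed to fix an internal node permanently at logic 0 or logic 1, and test vectors are chosen to expose each such fault at the chip's pins. The Python sketch below is purely illustrative: it simulates a two-gate circuit and shows a vector that detects a stuck-at-0 fault on the internal AND-gate output.

    def circuit(a: int, b: int, c: int, and_stuck_at_0: bool = False) -> int:
        """out = (a AND b) OR c, with an optional stuck-at-0 fault
        injected on the internal AND-gate output."""
        and_out = 0 if and_stuck_at_0 else (a & b)
        return and_out | c

    # The vector (1, 1, 0) drives the AND output to 1 and, with c = 0,
    # propagates it to the primary output, so the fault becomes visible.
    vector = (1, 1, 0)
    good = circuit(*vector)
    faulty = circuit(*vector, and_stuck_at_0=True)
    print(good, faulty, "detected" if good != faulty else "missed")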
5.24 The equipment used to apply the test programs
to the manufactured integrated circuits is itself very expensive.
The time a chip spends on the tester adds significantly to costs.
It is therefore important that the test is not only thorough,
but also efficient. This requires the designer of the circuit
to incorporate features into the design that ensure that it is
readily testable. This is another constraint on the design process.
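The commonest such feature is the scan chain: in test mode the chip's internal flip-flops are linked into one long shift register, so the tester can set and observe internal state directly through a single pin rather than through the surrounding logic. The Python model below is a deliberate simplification for illustration.

    class ScanChain:
        """Toy model of a scan chain: in test mode the flip-flops
        form a shift register between scan-in and scan-out pins."""

        def __init__(self, length: int):
            self.flops = [0] * length

        def shift(self, scan_in: int) -> int:
            """One test-mode clock: shift one bit in, one bit out."""
            scan_out = self.flops[-1]
            self.flops = [scan_in] + self.flops[:-1]
            return scan_out

        def load(self, bits):
            """Shift a complete test pattern into the flip-flops."""
            for b in reversed(bits):
                self.shift(b)

    chain = ScanChain(4)
    chain.load([1, 0, 1, 1])   # the tester sets internal state directly
    print(chain.flops)         # [1, 0, 1, 1]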
5.25 As the number of transistors that can be
manufactured economically on a chip has grown, the rise in complexity
of the designs has made production testing increasingly difficult
and expensive. However, the increased transistor resource allows
sophisticated test support structures to be incorporated onto
the chip at low cost, compensating at least in part for the growing
problem of test.
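A representative example of such a structure is built-in self-test (BIST), in which an on-chip pseudo-random pattern generator exercises the logic and a compactor reduces the responses to a short signature for comparison with a known-good value. The sketch below models the usual pattern generator, a linear feedback shift register (LFSR); the width and tap positions are just one workable choice.

    def lfsr_patterns(seed: int = 0b1001, count: int = 8):
        """4-bit Fibonacci LFSR (polynomial x^4 + x^3 + 1), the kind of
        pseudo-random pattern generator used in BIST hardware."""
        state = seed
        for _ in range(count):
            yield state
            feedback = ((state >> 3) ^ (state >> 2)) & 1   # XOR of the taps
            state = ((state << 1) | feedback) & 0b1111

    for pattern in lfsr_patterns():
        print(f"{pattern:04b}")   # cycles through 15 distinct non-zero states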
Innovation in design and architecture
5.26 The major thrust in commercial processor
development is, as noted above, towards ever-faster single microprocessors
for general-purpose applications. However, there is active research
into alternative ways to exploit the huge transistor
resource that CMOS technology is projected to yield over the next
decade.
5.27 As noted by Professor May (p 89), on-chip
multiprocessing exploits the transistor resource to implement
several relatively simple microprocessors on a single chip[46].
Although, as noted in paragraph 5.8, there are major obstacles
to be overcome in programming such a system for existing general-purpose
applications, recent developments in computer programming languages
make this more practical for newly-developed applications.
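The programming style such chips favour can be sketched in software: the work is split among several simple "processors" that share nothing and communicate only by passing messages. In the hypothetical Python sketch below, threads and queues merely stand in for hardware processors and on-chip links.

    import threading
    import queue

    def processor(inbox: queue.Queue, outbox: queue.Queue):
        """One simple 'processor': take work messages, return results."""
        while True:
            item = inbox.get()
            if item is None:            # sentinel: no more work
                return
            outbox.put(item * item)     # stand-in for real computation

    inbox, outbox = queue.Queue(), queue.Queue()
    cores = [threading.Thread(target=processor, args=(inbox, outbox))
             for _ in range(4)]
    for core in cores:
        core.start()

    for n in range(10):                 # distribute work as messages
        inbox.put(n)
    for _ in cores:                     # one sentinel per processor
        inbox.put(None)
    for core in cores:
        core.join()

    print(sorted(outbox.get() for _ in range(10)))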
5.28 Apart from embedded SoC designs, microprocessors
and their main memory systems are at present implemented on separate
chips. Communication between chips is much slower than communication
on the same chip, so processor-memory communication can be a significant
performance bottleneck. Research is under way into the
benefits of implementing a high-performance microprocessor and
its main memory system on the same chip.
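The size of the penalty is easy to quantify with the standard average-memory-access-time calculation. The figures in the Python fragment below are illustrative only, but they show how even a small proportion of slow off-chip accesses comes to dominate the average.

    def average_access_time(hit_ns: float, miss_rate: float,
                            off_chip_ns: float) -> float:
        """Average access time = on-chip hit time + miss rate * off-chip penalty."""
        return hit_ns + miss_rate * off_chip_ns

    # Illustrative: 1 ns on-chip hit; 2% of accesses go off-chip at 100 ns.
    print(average_access_time(1.0, 0.02, 100.0))   # 3.0 ns: tripled by 2% misses

Placing main memory on the same chip attacks the off-chip term directly, which is the motivation for the research described above.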
5.29 For very large and complex chips, chip-wide
communication and synchronisation is of growing concern, and research
is being carried out into new approaches. These include:
(a) chip area networks, which replicate the concepts
of the office computer network at the level of a single chip; and
(b) asynchronous design, which removes the need
for every action to be synchronised across a chip (the handshaking on which this relies is sketched below).
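Asynchronous design replaces the global clock with local request/acknowledge handshakes between neighbouring blocks. The Python sketch below is a much-simplified software model of the four-phase handshake used in many asynchronous circuits; the data value and structure are hypothetical.

    import threading

    req, ack = threading.Event(), threading.Event()
    bus = {}

    def sender(value: int):
        bus["data"] = value    # place data on the bus
        req.set()              # phase 1: raise request - data is valid
        ack.wait()             # phase 2: wait for acknowledge
        req.clear()            # phase 3: drop request

    def receiver(results: list):
        req.wait()             # data is valid once request is seen
        results.append(bus["data"])
        ack.set()              # acknowledge receipt
        while req.is_set():    # wait for the request to drop ...
            pass
        ack.clear()            # phase 4: ... then return to idle

    results = []
    t1 = threading.Thread(target=sender, args=(42,))
    t2 = threading.Thread(target=receiver, args=(results,))
    t2.start()
    t1.start()
    t1.join()
    t2.join()
    print(results)             # [42], transferred with no shared clock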
5.30 The increasing cost benefits of high-volume
manufacture are, as noted by the British Computer Society (BCS),
in apparent conflict with the market demand for a wide range of
highly differentiated products (p 42). This leads to a requirement
for standardised chip configurations that are applicable across
a wide range of application areas, with the product differentiation
being realised by changes in software and the use of reconfigurable
hardware components. As indicated by Professor May (Q 241),
there is a major challenge for the system architect in extending
the concept of universality from the programmable processor to
a general-purpose SoC.
5.31 Taking reconfigurable hardware a step further,
another active research area is concerned with reconfigurable
computing. This further blurs the distinction between the hardware
and software in a machine by exploiting the dynamic reconfigurability
of parts of the hardware through the use of electrically programmable
logic structures. The software contains instructions not only
(and as usual) about the processing required but also about the
way the hardware should perform the processing. In principle,
this should allow the hardware to be tuned for each application
as it runs, although a great deal of research is likely
to be required before the optimum way to perform such tuning is
understood.
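The basic building block of most electrically programmable logic is the look-up table (LUT): a small memory whose contents determine which Boolean function the surrounding block computes. The hypothetical Python sketch below shows the idea; "software", here just a four-bit configuration word, decides whether the same "hardware" behaves as an AND gate or an XOR gate.

    class LUT:
        """Model of a 2-input look-up table: the 4-bit configuration
        word is a truth table selecting one of 16 possible functions."""

        def __init__(self, config_bits):
            self.table = list(config_bits)   # truth table, indexed by inputs

        def __call__(self, a: int, b: int) -> int:
            return self.table[(a << 1) | b]

    and_gate = LUT([0, 0, 0, 1])   # configured as AND
    xor_gate = LUT([0, 1, 1, 0])   # same structure, reconfigured as XOR

    print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]
    print([xor_gate(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]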
Cognitive Systems
5.32 Alan Turing conceived in the 1940s that
the universal computer might be capable of emulating human mental
processes, but so far no machine has proved capable of passing
the Turing test[47].
However, if an ability to play chess is any indicator of intelligence,
it should be noted that the best human chess player has now been
beaten by a computer.
5.33 Researchers in the field of artificial intelligence
generally concern themselves with more modest goals than emulating
complete human mental processes. Significant progress is being
made in several areas such as the understanding of natural languages,
machine translation, expert systems (which take knowledge from
human experts and use rule-based mechanisms to incorporate that
knowledge into a computer program), machine learning, and so on.
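The rule-based mechanism at the heart of an expert system can be shown in a few lines: the program repeatedly applies "if conditions then conclusion" rules to a set of known facts until nothing new can be derived (forward chaining). The rules and facts in the Python sketch below are invented for illustration.

    # Each rule is (set of conditions, conclusion).
    rules = [
        ({"has_fever", "has_cough"}, "suspect_flu"),
        ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
    ]

    def forward_chain(facts: set, rules: list) -> set:
        """Apply rules until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))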
5.34 There is increasing interest in biologically-inspired
computer architectures. This is partly as a result of rapid increases
in the detailed understanding of biological systems such as brains
(in humans and other species). It also flows from frustrations
with the rate of progress in adapting conventional computers to
tasks that people find simple, such as recognising faces or understanding
speech. Thus there is growing interest in well-established fields
of research such as artificial neural networks, and newer areas
are also developing, such as genetic algorithms (modelling biological
evolution), which adapt software and hardware to a desired function
through random mutations and (imposed) survival of the fittest.
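The mechanism in its simplest form is easy to demonstrate. In the Python sketch below, a population of random bit strings is scored against a target function, only the fittest survive, and random mutations of the survivors form the next generation; every parameter is an arbitrary choice for illustration.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # the desired function, as a bit string

    def fitness(individual):
        """Score: how many positions match the target."""
        return sum(a == b for a, b in zip(individual, TARGET))

    def mutate(individual, rate=0.1):
        """Flip each bit with small probability (random mutation)."""
        return [bit ^ 1 if random.random() < rate else bit for bit in individual]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)   # rank by fitness
        if fitness(population[0]) == len(TARGET):
            break                                    # target function reached
        survivors = population[:5]                   # imposed survival of the fittest
        population = [mutate(random.choice(survivors)) for _ in range(20)]

    best = max(population, key=fitness)
    print(f"generation {generation}: {best}")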
5.35 As mentioned by Dr John Taylor (Q 468),
a recently-launched DTI Foresight study programme into Cognitive
Systems[48]
recognises the mutual benefits to computer scientists and neuroscientists
of pooling their knowledge. Insights into the operation of the
brain may stimulate innovations in computer design that enable
machines to emulate human functions better. In turn, these computer
architectures may yield results that improve understanding of
the function of the brain, with obvious potential benefits to
the treatment of mental health problems. Interdisciplinary activity
in this area may revolutionise computing in due course, but these
are early days in this work.
43 35.86 million million floating point operations per second.
44 www.top500.org gives details of the current top 500 supercomputers.
45 For a fuller discussion of these issues see Mr Ian Phillips' supplementary memorandum on behalf of the Institution of Electrical Engineers (p 61).
46 For example, picoChip - a recently-formed company based near Bath, as mentioned in the memorandum from Pond Venture Partners (p 212) - is developing a configurable processor for 3G mobile phone base stations which incorporates over 400 processors on a single chip.
47 A computer would pass the Turing test if a person communicating with it via a terminal was unable to determine whether the responses came from a machine or a human operator.
48 See www.foresight.gov.uk