What is exact and approximation algorithm, Computer Engineering

What are Exact and Approximation algorithms?

An algorithm designed to solve a problem exactly, guaranteeing an optimal (correct) solution, is called an exact algorithm. An algorithm designed to solve the problem approximately, trading guaranteed optimality for speed and returning a solution provably close to the optimum, is known as an approximation algorithm. Approximation algorithms are typically used for NP-hard problems, where exact algorithms can take exponential time in the worst case.
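To make the contrast concrete, here is a minimal Python sketch for the minimum vertex cover problem (the graph and function names are invented for illustration, not taken from the original answer): the exact algorithm tries subsets of vertices in increasing size, which is optimal but exponential in the worst case, while the classic matching-based approximation runs in polynomial time and returns a cover at most twice the optimal size.

    from itertools import combinations

    def exact_vertex_cover(vertices, edges):
        # Exact algorithm: test subsets in increasing size until one
        # covers every edge. Guarantees an optimal answer, but the
        # number of subsets grows exponentially with len(vertices).
        for k in range(len(vertices) + 1):
            for subset in combinations(vertices, k):
                chosen = set(subset)
                if all(u in chosen or v in chosen for u, v in edges):
                    return chosen
        return set(vertices)

    def approx_vertex_cover(edges):
        # 2-approximation: take both endpoints of any uncovered edge.
        # Runs in linear time; the cover is at most twice the optimum.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
    print(exact_vertex_cover([1, 2, 3, 4], edges))  # {1, 4}, size 2 (optimal)
    print(approx_vertex_cover(edges))               # {1, 2, 3, 4}, size 4

On this sample graph the exact search finds a cover of size 2, while the approximation may return size 4, still within its factor-2 guarantee.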

 

Posted Date: 7/27/2013 5:01:26 AM | Location: United States







Related Questions
Q. Show the Bus and Memory Transfers? A digital computer has many registers, and rather than connecting wires among all registers to transfer information between them, a common bus is used.
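As a rough sketch of the idea (register names and values invented for illustration), a common bus behaves like a multiplexer: select lines choose which register drives the bus, and a load signal clocks the bus value into the destination register.

    registers = {"R0": 0x12, "R1": 0x34, "R2": 0x56, "R3": 0x78}

    def bus_transfer(src, dst):
        # One transfer over the common bus: the select lines place the
        # source register on the bus, and the destination's load signal
        # captures the bus value on the next clock.
        bus = registers[src]
        registers[dst] = bus

    bus_transfer("R1", "R3")     # R3 <- R1 via the shared bus
    print(hex(registers["R3"]))  # 0x34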

What is the benefit of the Report Wizard over an AutoReport? It takes a little more work to create a report with the Report Wizard than with an AutoReport, but you have a lot more control over the report's design.

Problem: a) Authoring tools consist of two basic features: first, an authoring facility for creating and editing, and second, a presentation vehicle for delivery. The authoring…

Why is the data bus in most microprocessors bidirectional while the address bus is unidirectional?  Data Bus: these lines carry data to memory during a write and from memory during a read, so information must travel in both directions. Address Bus: addresses are generated only by the processor, so these lines need to carry information in one direction only.
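A small sketch of the asymmetry (memory size and addresses invented for illustration): the CPU always drives the address lines, while data travels toward memory on a write and back toward the CPU on a read.

    memory = [0] * 256

    def write(address, data):
        # CPU drives the address bus and the data bus: data flows CPU -> memory.
        memory[address] = data

    def read(address):
        # CPU drives the address bus; memory drives the data bus: memory -> CPU.
        return memory[address]

    write(0x10, 0xAB)        # data travels one way on the shared data lines
    print(hex(read(0x10)))   # 0xab travels back the other way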

The DOM (Document Object Model) is an interface specification maintained by the W3C DOM Working Group that describes an application-independent mechanism to access, parse, or update XML data.
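For illustration, a minimal example using Python's standard-library DOM implementation, xml.dom.minidom (one of many DOM implementations; the sample document is made up):

    from xml.dom.minidom import parseString

    doc = parseString("<library><book>1984</book><book>Dune</book></library>")

    # Access the parsed tree through the DOM interface.
    for book in doc.getElementsByTagName("book"):
        print(book.firstChild.data)        # 1984, then Dune

    # Update the document through the same interface.
    new_book = doc.createElement("book")
    new_book.appendChild(doc.createTextNode("Hyperion"))
    doc.documentElement.appendChild(new_book)
    print(doc.documentElement.toxml())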

Define miss penalty. The extra time required to bring the desired information into the cache is known as the miss penalty.
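For context (a standard textbook relation, not part of the original answer), the miss penalty feeds directly into the average memory access time: AMAT = hit time + miss rate × miss penalty. A quick sketch with invented numbers:

    def amat(hit_time, miss_rate, miss_penalty):
        # Average memory access time, in the same units as its inputs.
        return hit_time + miss_rate * miss_penalty

    # Hypothetical cache: 1-cycle hit, 5% miss rate, 100-cycle miss penalty.
    print(amat(1, 0.05, 100))   # 6.0 cycles on average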

How will you form an 8-bit adder using two 4-bit adder ICs 7483? Ans: The 4-bit adder IC is IC 7483. It has two 4-bit data inputs, an input carry, a 4-bit sum output, and an output carry. To form an 8-bit adder, cascade two 7483s: connect the carry output of the lower-order IC to the carry input of the higher-order IC.
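A behavioral sketch of the cascade (function names invented; this models the arithmetic, not the IC's pinout):

    def add4(a, b, carry_in):
        # Models a 7483-style 4-bit adder: two 4-bit inputs plus carry-in,
        # producing a 4-bit sum and a carry-out.
        total = (a & 0xF) + (b & 0xF) + carry_in
        return total & 0xF, total >> 4

    def add8(a, b):
        # 8-bit adder from two cascaded 4-bit adders: the carry-out of
        # the low nibble feeds the carry-in of the high nibble.
        lo, c4 = add4(a & 0xF, b & 0xF, 0)
        hi, c8 = add4(a >> 4, b >> 4, c4)
        return (hi << 4) | lo, c8

    total, carry = add8(0xAB, 0x77)
    print(hex(total), carry)   # 0x22 1, i.e. 0xAB + 0x77 = 0x122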

Which language is a platform-independent language? Java.

Define micro routine and microinstruction. A sequence of control words corresponding to the control sequence of a machine instruction represents the micro routine for that instruction, and each control word in the sequence is called a microinstruction.
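A toy sketch of the relationship (opcodes and control-word encodings are invented): the control store holds one micro routine per machine instruction, and each control word in that routine is one microinstruction.

    # Control store: each opcode selects a micro routine, i.e. a sequence
    # of control words; each control word is one microinstruction.
    CONTROL_STORE = {
        "ADD":  [0b1010, 0b0110, 0b0001],   # e.g. fetch operand, add, write back
        "LOAD": [0b1010, 0b0011],           # e.g. fetch operand, load register
    }

    def execute(opcode):
        # Micro-sequencer: issue one microinstruction per control step.
        for step, control_word in enumerate(CONTROL_STORE[opcode]):
            print(f"{opcode} step {step}: control word {control_word:04b}")

    execute("ADD")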

Hardware Implementation for signed-magnitude data: When multiplication is implemented in a digital computer, the process is changed slightly. Here, in place of providing a register for every partial product, the partial products are accumulated successively in a single register using an adder and shift registers.
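A software sketch of the shift-and-add scheme (variable names invented; real hardware typically shifts the accumulator right rather than the multiplicand left, but the arithmetic is the same): the sign of the product is the XOR of the operand signs, and the magnitudes are multiplied by repeated add-and-shift.

    def signed_magnitude_multiply(sign_a, mag_a, sign_b, mag_b, bits=4):
        # Sign is handled separately: XOR of the operand signs.
        sign_p = sign_a ^ sign_b
        acc = 0
        for _ in range(bits):
            if mag_b & 1:          # low multiplier bit selects an add
                acc += mag_a
            mag_a <<= 1            # shift the multiplicand left
            mag_b >>= 1            # shift the multiplier right
        return sign_p, acc

    print(signed_magnitude_multiply(0, 6, 1, 5))   # (1, 30), i.e. -30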