You decompose a problem into smaller subproblems, solve those, and then combine their solutions into a solution for the original problem.
Given a problem of size n, divide it into a subproblems of size n/b, where a >= 1 and b > 1. The requirement b > 1 is strict so that each subproblem really is smaller than the original, which is what makes the notion of dividing into smaller problems meaningful. Solve the subproblems recursively; the idea is that after enough divisions a subproblem becomes constant in size and easy to solve. In the last stage, combine the subproblem solutions to obtain the solution to the original problem.
Merge sort is a classic example of divide and conquer.
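A minimal sketch of merge sort in Python, showing the divide, recurse, and combine stages (function names `merge_sort` and `merge` are just illustrative choices):

```python
def merge_sort(arr):
    # Base case: lists of size 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split into a = 2 subproblems of size n/b = n/2.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    return merge(left, right)

def merge(left, right):
    merged = []
    i = j = 0
    # Repeatedly take the smaller front element of the two halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the halves may still have leftover elements.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Each recursive call handles half the input, and the merge step does the combining work described above.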
The running time of such an algorithm satisfies a recurrence of the form
T(n) = aT(n/b) + [work required to divide and combine]
For merge sort, a = 2 and b = 2, and merging two sorted halves takes O(n) time, so T(n) = 2T(n/2) + O(n), which solves to O(n log n).
Divide and conquer also helps in solving difficult problems such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n-1 twice, plus a single move of the largest disk.
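The Tower of Hanoi reduction can be sketched as follows (peg labels and the function name `hanoi` are illustrative; the function returns the list of moves as (source, target) pairs):

```python
def hanoi(n, source, target, spare):
    # Base case: a tower of height 0 requires no moves.
    if n == 0:
        return []
    moves = []
    # Move the top n-1 disks out of the way, onto the spare peg.
    moves += hanoi(n - 1, source, spare, target)
    # Move the largest disk directly from source to target.
    moves.append((source, target))
    # Move the n-1 disks from the spare peg onto the largest disk.
    moves += hanoi(n - 1, spare, target, source)
    return moves

print(len(hanoi(3, 'A', 'C', 'B')))  # 7 moves, i.e. 2**3 - 1
```

The recurrence here is T(n) = 2T(n-1) + 1, which gives 2^n - 1 moves in total.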
Divide-and-conquer algorithms are naturally suited to execution on multi-processor machines, especially shared-memory systems, because distinct sub-problems can be executed on different processors and the communication of data between processors does not need to be planned in advance.
Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory.