Processor technology is still advancing dramatically and promises enormous improvements in processing data for the next decade. These improvements are driven by parallelisation and specialisation of resources, and `unconventional hardware' such as GPUs or the Cell processor can be seen as forerunners of this development. At the same time, much smaller advances are expected in moving data; this means that the efficiency of many simulation tools -- particularly those based on Finite Elements, which often lead to huge but very sparse linear systems -- is restricted by the cost of memory access. We explain our approach of combining efficient data structures with multigrid solver concepts, and discuss the influence of processor technology on numerical and algorithmic developments. Concepts of `hardware-oriented numerics' are described, and their numerical and computational characteristics are examined based on implementations in FEAST, a high-performance solver toolbox for Finite Elements which is able to exploit unconventional hardware components as `FEM co-processors', on sequential as well as on massively parallel computers. Finally, we demonstrate prototypically how these algorithmic and computational concepts can be applied to solid mechanics problems, and we present simulations on heterogeneous parallel computers with more than one billion unknowns.