A Kernel method for probabilistic seismic hazard computation, which requires no definition of source zones, has been compared with the classical Cornell-McGuire method. The Kernel method used here combines statistical consistency with an empirical knowledge base (the earthquake catalogue) and incorporates parameters describing the structured character of the earthquake distribution. Statistical Kernel techniques are used to compute probability density functions for the size and location of future events; linear trends based on geological or seismic information, as well as uncertainties in the magnitude and epicentral location of each earthquake, are incorporated in the statistical estimates. The Kernel method has been explored in terms of its available parameterisation, with particular attention to how tectonic knowledge of a region and expert judgement can be incorporated in reasonable and statistically meaningful ways. The two computation methods were compared using synthetic data and real seismicity catalogues (from Norway and Spain). With real data, the Kernel method generally yields lower hazard estimates than the Cornell-McGuire approach, and the difference between the two methods increases as the catalogue deviates more strongly from the self-similarity implied by the Gutenberg-Richter relationship. The Kernel method circumvents some of the simplifications inherent in conventional zoning methods and has the potential to develop into a viable alternative for hazard computation.
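For orientation, the following is a minimal, generic sketch of the two ingredients referred to above, not the specific formulation adopted in this study: a kernel density estimate of epicentral activity built directly from the catalogue, and the Gutenberg-Richter relation whose self-similarity assumption drives the observed differences between the methods. Here x_1, ..., x_n denote the catalogued epicentres, K a smoothing kernel, h a bandwidth, N(>=m) the number of events of magnitude at least m, and a, b regional constants; all notation is introduced here for illustration only.

```latex
% Generic two-dimensional kernel density estimate of epicentral location
% (the study's kernel may use direction- and magnitude-dependent bandwidths)
\hat{f}(x) \;=\; \frac{1}{n\,h^{2}} \sum_{i=1}^{n} K\!\left(\frac{x - x_{i}}{h}\right)

% Gutenberg-Richter recurrence relation; departures of the catalogue from
% this self-similar form widen the gap between the two hazard estimates
\log_{10} N(\ge m) \;=\; a - b\,m
```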