from Historical Earthquake Felt Reports
Istituto Nazionale di Geofisica, Roma
(Revised version, August 1998)
The systematic application of this method to all the M>5.5 earthquakes that occurred in the central and southern Apennines (Italy) in the past four centuries returned encouraging results that compare well with existing instrumental, direct geological and geodynamic evidence. The method is quite stable for different choices of the algorithm parameters and provides elongation directions which in most cases can be shown to be statistically significant. In particular, the resulting pattern of source orientations is rather homogeneous, showing a consistent Apennines-parallel trend that agrees well with the NE-SW extension style of deformation active in the central and southern portions of the Italian peninsula.
Defining a set of seismogenic sources through a standardized criterion is therefore not an easy assignment. As Reiter  puts it, "Defining and understanding seismotectonic sources is often the major part of a seismic hazard analysis and requires knowledge of the regional and local geology, seismicity and tectonics". Perhaps due to the difficulties in identifying individual large sources, either by recourse to geological data alone or through a combination of historical and geological evidence, most national seismic hazard plans still rely almost exclusively on catalogues of historical seismicity with very little support from field and instrumental data (e.g. Muir-Wood ). This tendency is reinforced by the circumstance that in many countries the available knowledge of historical seismicity is often considered sufficient to supply a satisfactory representation of the earthquake potential.
Although we certainly agree that historical catalogues supply a reasonable first-cut representation of regional seismicity, we want to use Italy as an example to demonstrate that the information contained in homogeneously collected intensity observations can be used to quantify the essential parameters of the seismic source, therefore providing a more valuable contribution to the assessment of seismic hazard than commonly assumed. In particular, we want to propose a strategy through which seismogenic sources documented solely by historical information can be described by the same set of physical parameters normally used to describe sources for which modern instrumental observations are available. The ultimate goal of this strategy is to complement and strengthen with historical information the usually limited instrumental or surface faulting evidence forming the current earthquake distribution and recurrence models.
Our approach is similar to that taken by Johnston  for quantifying the seismic moment of some large pre-instrumental earthquakes in stable continental regions, but somewhat more ambitious. In this paper we process intensity data (1) to estimate the earthquake location, expressed as the center of the distribution of damage; (2) to assess the earthquake seismic moment, expressed in terms of moment-magnitude obtained from the overall extent of the damage pattern; and (3) to make inferences about the physical dimensions and (4) orientation of the causative fault, respectively using empirical relationships and using an original algorithm described later on in this paper. The estimated parameters are then calibrated using earthquakes for which both intensity and modern instrumental data are available. Finally, each earthquake source is conceptually and graphically delineated by a rectangle representative of the fault at depth.
Our ultimate goal is to identify as many as possible of the potential or proven sources of damaging earthquakes in an extended region. After a first catalogue of historical sources has been obtained, additional and often entirely unknown seismogenic faults can be inferred from the analysis of the spatial relations between adjacent sources, or between them and other smaller (non-damaging) historical or instrumental earthquakes, or by recourse to more focused geological observations. In the process we would also test the extent of overlap between adjacent sources, that is, the regularity of the seismic release in space. Proving or disproving such a stringent regularity would obviously have important implications for the assessment of regional seismic hazard.
In this paper we summarize the full approach describing in better detail the part of it that deals with the assessment of the fault orientation. We use the central and southern Apennines of Italy as a test area for the stability and appropriateness of the proposed method.
Starting with the contribution of Ohta and Satoh , several attempts have been made at modeling macroseismic intensities generated by sources of known geometry with various techniques. Among them are the kinematic function KF [Chiaruttini and Siro, 1981], the generation of synthetic seismograms by normal mode summation [Panza and Cuscito, 1982; Suhadolc et al., 1988; Pierri et al., 1993] or by ray-tracing [Zahradník, 1989]. Some of these investigators have subsequently tried to infer the focal parameters of historical earthquakes from intensity data (e.g., Chiaruttini and Siro ; Sirovich ), but so far the results of these attempts have not been extensively tested against instrumental data, partly because good-quality, homogeneously collected intensity data were not widely available in revised catalogs until 1995.
Notwithstanding possible rapid developments of these techniques in the near future, we believe that at present most of them are not reliable enough for widespread application. In the absence of a physical model explaining the spatial pattern of intensity data (in particular, of a function for converting ground displacement, velocity and acceleration into felt intensity), macroseismic data alone have not been able to constrain efficiently parameters of the seismogenic source which do not have a straightforward correspondence with the observed intensity, such as the fault dip and the sense of slip (fault rake). On the contrary, the strike of the seismic source (that is, the azimuth of the seismogenic fault) is somehow related to the distribution of the earthquake effects. In the past, the fault azimuth was commonly inferred by means of a visual inspection of hand-drawn isoseismals (e.g., in Shebalin  and subsequent Shebalin-type approaches). This procedure obviously introduces a strong element of arbitrariness, since the person in charge of drawing the isoseismals may somehow convey in the artwork his or her own preconceptions about the location and geometry of the seismogenic fault, often forcing the data to say more than they really show. More recently other workers have produced "objective" isoseismals through automatic computer contouring (e.g., De Rubeis et al. ), but also in this case all possible inferences can only be visual, and therefore subjective and almost impossible to test statistically.
A visual analysis of isoseismal lines may indeed be helpful for identifying survey blunders or anomalous intensity points resulting from site effects. However, the statistical analysis of individual observed intensity values must be preferred when the goal of the analysis is to derive global quantitative estimates of the main source parameters. This is now a viable option in Italy, following the recent publication of two large and comprehensive databases of macroseismic data by Boschi et al. [1995, 1997] ("Catalogue of Strong Italian Earthquakes from 461 B.C. to 1990", hereinafter referred to as CFTI, containing over 31,000 data relative to 460 earthquakes) and Monachesi and Stucchi  ("DOM 4.1 Catalogue", containing over 37,000 data relative to 950 earthquakes). Unlike traditional catalogues, which for each earthquake usually report only synthetic information on the origin time, location and magnitude, these new compilations also include the list of the localities for which the Mercalli intensity was re-evaluated with homogeneous and standardized criteria.
Due to the peculiar characteristics of the historical information available in Italy, all the intensities of this database are referred to the definitions of the MCS scale [Sieberg, 1932] rather than to the more recent EMS scale [Grünthal, 1993] or to the Modified Mercalli scale (MM) [Wood and Neumann, 1931], which is most commonly used in North America.
For the region of our interest the CFTI catalogue lists 41 large earthquakes, for a total of over 7,500 intensity data points (see Table 1 ). When this paper was in the final stage of preparation, we decided to include in our analysis the felt reports gathered during a preliminary survey of the effects of the 26 September 1997, Colfiorito, central Italy earthquakes [WGMSCE, 1997], as this allows for an interesting a posteriori test of our approach.
Our strategy involves five steps ( Figure 1 ):
Step 1 - Locating the source
We first compute the epicenter of each of the 42 earthquakes from macroseismic data alone. The epicenter is found through an averaging technique described by Gasperini and Ferrari [1995, 1997] and already utilized in the second release of the CFTI. This technique is briefly summarized in Appendix 1 . This epicenter is then used as the origin of the reference system for locating the extended source and for analyzing the azimuthal distribution of felt intensities to determine the source strike.
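The averaging idea behind this step can be sketched in a few lines. This is a deliberately simplified illustration, not the actual Gasperini and Ferrari [1995, 1997] algorithm (see Appendix 1): here the epicenter is approximated by the plain barycenter of the sites in the highest intensity classes, and the `reports` list is hypothetical.

```python
# Simplified macroseismic epicenter: barycenter of the highest-intensity
# sites. The real algorithm (Appendix 1) uses a more refined averaging and
# trimming scheme; this sketch is illustrative only.
def macroseismic_epicenter(sites, top_classes=2):
    """sites: list of (lat, lon, MCS intensity) tuples."""
    imax = max(i for _, _, i in sites)
    # keep only the sites within `top_classes` degrees of the maximum
    sel = [(lat, lon) for lat, lon, i in sites if i > imax - top_classes]
    lat0 = sum(lat for lat, _ in sel) / len(sel)
    lon0 = sum(lon for _, lon in sel) / len(sel)
    return lat0, lon0

# Hypothetical felt reports: (latitude, longitude, intensity)
reports = [(42.0, 13.5, 10), (42.1, 13.6, 10), (41.9, 13.4, 9), (42.3, 13.9, 7)]
print(macroseismic_epicenter(reports))
```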
Step 2 - Assessing the earthquake seismic moment
The distribution of felt intensities of each earthquake is then used to infer the earthquake seismic moment M0 and the corresponding moment magnitude M using an algorithm described by Gasperini and Ferrari . Similarly to the earthquake location problem, this algorithm has already been used in the second release of the CFTI. This algorithm is briefly described in Appendix 2 .
Step 3 - Assessing the source dimensions (length and width)
The seismic moment of each individual earthquake is then used to infer the physical dimensions of the relevant source under the hypotheses set forth above (earthquake is characteristic and representative of maximum source potential). We used Wells and Coppersmith's  empirical relationships to calculate the full rupture length and width of the seismogenic source. Although most of the best studied strong Italian earthquakes exhibit pure normal faulting, reverse and strike-slip faulting earthquakes are expected to take place particularly along the Adriatic margin. For this reason we used the relationships that were derived by these investigators as an average of "All" possible faulting styles:
log10(RW) = 0.32(±0.02) · M − 1.01(±0.10)
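As a numerical sketch, the width regression quoted above is easy to evaluate. We pair it here with a subsurface rupture length regression of the same Wells and Coppersmith [1994] "All" set, whose coefficients (0.59, −2.44) are an assumption on our part and should be checked against the original tables.

```python
# Wells and Coppersmith [1994] "All"-style regressions (M = moment magnitude).
# The width relation is the one quoted in the text; the length coefficients
# are assumed from the same paper's subsurface rupture length fit.
def rupture_width_km(M):
    return 10 ** (0.32 * M - 1.01)

def rupture_length_km(M):
    return 10 ** (0.59 * M - 2.44)

for M in (5.5, 6.0, 6.9):
    print(f"M {M}: length ~ {rupture_length_km(M):.1f} km, "
          f"width ~ {rupture_width_km(M):.1f} km")
```

For M 6.9 this yields a rupture length of roughly 40 km, in line with the figure quoted later for the 1915 Avezzano earthquake.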
Step 4 - Assessing the source orientation (azimuth)
Once the source has been located and its physical dimensions evaluated, this step involves assessing its true orientation. This is accomplished by a new algorithm described in detail in the following section.
Step 5 - Representing the source
The seismic source is finally drawn as a rectangle centered on the macroseismic epicenter. The rectangle represents either the actual surface projection of the causative fault or, at least, the surface projection of the portion of the Earth's crust within which the fault is more likely to be located. Since Italian faults tend to be predominantly dip-slip, as a first approximation the width of the rectangle delineating each source is plotted as if it represented the projection of a fault dipping 45° in an unspecified direction perpendicular to the fault strike (see Figure 2a ).
Similar to any other analysis of angular data, the reliability of the circular mean rests upon the uniformity in the distribution of the data themselves. Since the angular location and the dispersion are not independent variables, a uniform distribution has no significant central value and any further statistical analysis is therefore generally meaningless. A number of statistical tests are available to analyze the uniformity of a circular distribution. Among them the Kuiper test proved preferable for small datasets, while the Rayleigh test proved most powerful where the distribution of the parent population is Von Mises-type [Rock, 1988]. The mathematical details of the procedure for calculating the circular mean, the associated standard deviation and the significance levels of the distribution uniformity tests are described in Appendix 3 .
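A minimal sketch of these circular statistics follows, under two stated assumptions: fault azimuths are treated as axial (undirected) data via the standard angle-doubling device, and the Rayleigh test is evaluated with the common large-sample approximation s.l. ≈ exp(−n·R̄²). The exact formulas adopted in this paper are those of Appendix 3, and the input azimuths below are hypothetical.

```python
import math

def axial_mean_and_rayleigh(azimuths_deg):
    """Circular mean of axial data (angles defined modulo 180 degrees)."""
    n = len(azimuths_deg)
    # double the angles so that, e.g., N10W and its opposite direction
    # count as the same axis
    c = sum(math.cos(2 * math.radians(a)) for a in azimuths_deg) / n
    s = sum(math.sin(2 * math.radians(a)) for a in azimuths_deg) / n
    rbar = math.hypot(c, s)                    # mean resultant length
    mean = math.degrees(math.atan2(s, c)) / 2.0 % 180.0
    sl = math.exp(-n * rbar ** 2)              # approximate Rayleigh s.l.
    return mean, rbar, sl

mean, rbar, sl = axial_mean_and_rayleigh([130.0, 140.0, 135.0, 145.0, 138.0])
print(f"mean azimuth ~ {mean:.1f} deg, Rbar = {rbar:.2f}, "
      f"Rayleigh s.l. ~ {sl:.3f}")
```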
Before running the algorithm with real data we must select an appropriate lower threshold for the macroseismic intensity of the data points to be included in the averaging process. Ideally, the dataset of each earthquake source should include only the localities where the observed intensity is largest. In real applications, however, such maximum effects often occur at a limited number of scattered sites as a result of local amplifications induced by the near-surface geology, of particular characteristics of the local buildings, of focusing of the seismic energy or constructive interference of wave-trains from different portions of the source. Under such circumstances the highest intensities of a given earthquake may represent outliers in the data distribution, in which case the source is more correctly represented by the pattern of sites that experienced an intensity one or even two degrees lower than the maximum observed. This apparent dualism is well established in modern earthquake catalogues, which usually separate the epicentral intensity I0 from the maximum observed intensity Imax (e.g., Boschi et al. ).
A rational criterion for choosing the intensity threshold is to select a value such that the average epicentral distance of sites having a larger intensity is comparable to the fault size. As we have seen in Step 3, we can use Wells and Coppersmith's  empirical relationships to calculate an approximate source length as a function of M. We can then pick the intensity threshold that gives the best equivalence between half of the fault length and the average distance of the data points from the epicenter. Nevertheless, our experience shows that the plain application of these criteria may lead to retaining data points having an intensity more than two or three degrees lower than the maximum intensity. This condition may be (1) the effect of strong intensity amplification effects, (2) the result of incompleteness of the macroseismic field and hence of mislocation of the true epicenter (e.g., when this occurs offshore or close to the shoreline), or (3) the result of overestimation of the earthquake magnitude. We therefore decided to establish a lower bound for the intensity threshold to prevent the inclusion of intensity data which are too low to be representative of the source orientation. Based on our experience with the dataset analyzed in this paper, we set this lower bound at one degree below the maximum intensity plus uncertainty (for example, for an Imax equal to X we allow the intensity threshold to reach intensity VIII-IX). This is also the maximum value normally attained by the difference between the epicentral intensity I0 and the maximum intensity Imax.
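The threshold criterion with its one-degree lower bound can be sketched as follows. This is a simplified illustration: `distances_by_intensity` is a hypothetical mapping from each intensity class to the epicentral distances (km) of its sites, and the misfit measure is the plain absolute difference described in the text.

```python
def pick_threshold(distances_by_intensity, half_fault_length_km, imax):
    """Pick the intensity threshold whose pooled mean epicentral distance
    best matches half of the fault length, never going more than one
    degree below the maximum observed intensity."""
    best_thr, best_misfit = None, float("inf")
    for thr in sorted(distances_by_intensity, reverse=True):
        if thr < imax - 1:          # lower bound: one degree below Imax
            break
        pooled = [d for i, ds in distances_by_intensity.items()
                  if i >= thr for d in ds]
        misfit = abs(sum(pooled) / len(pooled) - half_fault_length_km)
        if misfit < best_misfit:
            best_thr, best_misfit = thr, misfit
    return best_thr

# Hypothetical dataset: intensity class -> epicentral distances (km)
data = {10: [3.0, 4.0], 9: [8.0, 10.0, 12.0], 8: [20.0, 25.0]}
print(pick_threshold(data, half_fault_length_km=8.0, imax=10))
```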
An additional important issue is the choice of an appropriate distance weighting scheme. Under the assumptions mentioned above and for any given intensity, the farther a certain site, the higher the probability that the azimuth of that site approximates the strike of the fault. We assume that this probability is proportional to some function of the distance, normalized by the average epicentral distance of all data points having the same intensity. This function should be somehow related to the attenuation of the intensity with distance. A simple relation which proved to fit well the attenuation of earthquake intensity over the Italian territory was proposed by Berardi et al. . This relation, termed CRAM (Cubic Root Attenuation Model), expresses the decay of intensity as a function of the cubic root of the epicentral distance.
To evaluate the overall reliability of Step 4 of our modeling approach we performed a stability analysis by comparing the results of different weighting schemes. Reasonable choices for this test include:
[a] no distance weighting (all data are assigned the same weight);
[b] cubic root of distance weighting (see above);
[c] distance weighting (weight is proportional to the normalized distance of the point from the epicenter).
Similarly, we tested different lower bounds for the intensity thresholds according to the following schemes:
[d] zero degree lower bound (which implies that only data points where maximum intensity is observed are used);
[e] one degree lower bound (see discussion above);
[f] no lower bound (all available data could be used).
Notice that in both cases we are essentially comparing the results of our preferred or "central" schemes, indicated by [b] and [e] and already described in the text, with those obtained using two extreme scenarios.
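The three distance-weighting schemes can be written as weight functions of the epicentral distance d normalized by the mean distance d_mean of points sharing the same intensity. This is a sketch: the paper ties its preferred [b] scheme to the CRAM attenuation relation, whereas here the cubic root is applied directly to the normalized distance.

```python
def weight_a(d, d_mean):
    """[a] no distance weighting: every data point counts the same."""
    return 1.0

def weight_b(d, d_mean):
    """[b] cubic-root weighting (preferred scheme)."""
    return (d / d_mean) ** (1.0 / 3.0)

def weight_c(d, d_mean):
    """[c] linear distance weighting."""
    return d / d_mean
```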
Finally, we need to define the minimum size earthquake for which the method can be used with confidence. This step is crucial since the analysis of earthquake sources comparable in size with the average distance between the sites used to estimate the azimuth could return meaningless results because of the low "resolving power" of the data distribution itself. Based on the average spacing of historical settlements in Italy, we assume this minimum fault length to be somewhere between 5 and 10 km, which corresponds to a moment magnitude of 5.3 and 5.8 respectively [Wells and Coppersmith, 1994]. We therefore decided to analyze only earthquakes for which M>5.5. Good candidates must also be characterized by at least 5 data points, which is the minimum figure for which the statistical tests hold rigorously. This condition applies to 27 out of 42 earthquakes of magnitude 5.5 and above reported by the CFTI for the region of interest (see column NAz in Table 1 ). To maximize the use of available data, however, we tentatively extended the application of the algorithm to 9 additional events for which at least 3 data points are made available by the selection criteria.
A similar procedure was followed for all the 42 M>5.5 central and southern Apennines earthquakes that we selected to test our approach. Figures 4 and 5 show the location, extent and orientation of the inferred sources. These earthquakes occurred between 1600 AD and the present, and are shown by rectangles constructed using exclusively historical information following the five steps of our modeling scheme. We wish to recall that such rectangles constitute a synthetic representation of the source that is coherent with standard schematizations based on instrumental or field evidence. Figure 4 shows the results obtained using different distance weighting schemes, while Figure 5 shows the effect of different choices of lower bounds for the intensity threshold. Under the relatively strict requisites of our preferred schemes ([b] and [e]), for 6 out of 42 earthquake sources we could calculate only the location and size but not the azimuth (all of them have M<6.0), and for this reason they are shown with circles in which the diameter is the estimated fault length (except for three solutions obtained with the more tolerant scheme [f], for which a rectangle is shown).
For many sources the estimated azimuth does not differ much for different distance weighting schemes (see Figure 4 ). The most evident discrepancy concerns the large 1627a, Gargano earthquake, which appears to vary from a trend almost parallel to the Apennines (from about N60°W for the [a] and [b] schemes to about N30°E for the [c] scheme). Less pronounced differences (within 10°) can be observed for some of the largest earthquakes such as the 1980, Irpinia, the 1915, Avezzano, the 1732, Valle Ufita, and the 1703a, Norcia. In general, the algorithm seems quite stable for different weighting schemes; in particular, in almost all cases the [b] scheme returns a result that is intermediate with respect to the other two, and for this reason we decided to regard it as our best choice.
Choosing different lower bounds for the intensity thresholds (see Figure 5 ) also does not appear to return drastically different results. The solutions obtained using the [e] and [f] schemes are almost coincident for most of the earthquakes. The only significant difference concerns the 1703a, Norcia earthquake and three relatively small (M<6.0) earthquakes (1762, 1786 and 1851b), for which the azimuth can only be computed using the [f] scheme. On the contrary, larger deviations exist between the [d] scheme and the other two; the largest of them again concerns the 1627a, Gargano earthquake, the source of which varies in orientation by nearly 90° from one scheme to another. For two other large earthquakes (1688, Sannio and 1980, Irpinia) the deviation ranges between 10° and 20°. It should also be noted that in 21 cases (versus 6 for scheme [e] and 3 for scheme [f]) the azimuth cannot be computed with the more demanding scheme [d] due to an insufficient number of data points (less than 3). In contrast, the algorithm is rather stable with respect to the other two schemes. This suggests that the choice of using only data having the same intensity as the epicentral intensity (scheme [d]) is too restrictive and represents an unjustified limitation of the applicability of the algorithm. We therefore decided to assume the [e] scheme, which is also more plausible from the point of view of the physics of the problem, as the most reasonable and reliable choice.
Our preferred solutions are shown by white rectangles enclosed by a solid line in Figure 5 and are listed in Table 1 along with our estimated macroseismic epicenter and moment magnitude M. For each earthquake Table 1 also reports the significance level (s.l.) obtained from the Rayleigh and Kuiper distribution uniformity tests (see Appendix 3 ). In most cases the Kuiper test allows the H0 hypothesis that the data distribution is uniform to be rejected at least at s.l.<0.05, and therefore the source orientation can be estimated with confidence. For 11 earthquakes the significance level is larger than 0.05 and the H0 hypothesis cannot be confidently rejected. We could tentatively reject the H0 hypothesis for 4 of these 11 earthquakes where s.l.<0.10, whereas for the remaining 7 events the results must be considered with caution. The reason why we do not simply discard these results is that for most of these events the test statistics are not rigorous, as the source azimuth was computed using fewer than five intensity data points. In turn, the Rayleigh test does not allow the uniformity hypothesis to be rejected for about half of the computed azimuths (it can be rejected tentatively at s.l.<0.10 for 5 of them). At least some of these failures, however, could be ascribed to significant departures from a Von Mises-type distribution (which is not established for our data) more than to actual uniformity in the distribution of the data. At any rate, for both tests most of the failures concern moderate-sized events (M<6.0), hence smaller sources for which the azimuth is more difficult to estimate.
Table 2 shows a summary of the comparison between these instrumentally or geologically derived azimuths and the results of our intensity-based computations. In general the agreement is quite satisfactory. In particular, for the 1980, Irpinia earthquake our estimate is very close (within 10°) to the orientation of both of the CMT nodal planes and of the geologically inferred fault. For the 1979, Valnerina earthquake our solution is almost coincident with one of the two nodal planes, but unfortunately no geological or seismological evidence is available to date to decide which is the actual rupture plane. For the 1915, Avezzano earthquake the maximum difference is 13°. For the remaining two earthquakes (1962, Irpinia and 1984, Val Comino) our result lies almost in the middle of the two instrumental solutions, with differences in the order of 20°-30°. Also for these two events no conclusive evidence exists to date as to which of the two nodal planes is the actual rupture plane.
For three especially well documented earthquakes we decided to extend the comparison to the full definition of the seismogenic source. Figures 6a , 6b and 6c summarize the results of a comparison of evidence available respectively for the 1915 Avezzano, 1980 Irpinia, and 1997 Colfiorito earthquakes versus the estimates derived in this paper. The following discussion focuses on the most evident discrepancies that come out of this exercise. For the source parameters that are fit satisfactorily the reader may refer directly to the information shown in Figure 6a , 6b and 6c .
The 1915, Avezzano (central Italy) earthquake is the second deadliest in Italian history ( Figure 6a ). The concentration of population in the depression of the former Fucino Lake, which had been reclaimed in the 1860s and soon after re-utilized for extensive agricultural development and for new settlements, and the widespread amplifications of the ground motion induced by the particular configuration of the area, conspired to turn this earthquake into an immense catastrophe. Perhaps for this reason in current catalogues this earthquake is characterized by a large number of localities which were assigned intensity XI (see Figure 6a ). This circumstance has driven the inferred M up to 6.9, which implies a nearly 40 km-long causative fault. The geodetic model proposed by Ward and Valensise  implies a M 6.6, but this is a minimum figure as it is based on the portion of the fault that could be resolved by observations of coseismic strain. In view of this limitation and given the extent of the observed surface ruptures, we may conclude that the true M of the 1915 earthquake was between 6.7 and 6.8.
Part of the misfit in the orientation of the fault could be accounted for by the northwestward propagation of the coseismic rupture [Berardi et al., 1995] and by the lack of settlements to the north and south of the epicenter.
Our intensity-based source for the 1980, Irpinia earthquake ( Figure 6b ) is quite surprising for it fits the real seismic source nearly to perfection except for its location, which is shifted to the northwest by about 8 km. Indeed the macroseismic solution could not capture the intrinsic complexity of the earthquake rupture, that was characterized by at least three discrete subevents occurring within a 40 s time span, but it somehow responded to the northwestward propagation of the rupture (e.g., Bernard and Zollo ), which caused an asymmetry in the distribution of the highest reported intensities with respect to the location of the seismogenic source.
The Colfiorito earthquakes ( Figure 6c ) make an especially interesting case as they occurred immediately after the modeling procedure and its parameters had been firmly established based on the experience gained from the rest of our dataset. The analysis uses the results of a preliminary survey of the earthquake [WGMSCE, 1997] completed on 2 October, that is, a week after the mainshocks, because the damage pattern was soon after worsened by a series of strong aftershocks (M>5), which effectively extended the region that ruptured during the sequence.
The main limitation of our macroseismic solution is represented by its inadequacy to account for multiple ruptures occurring closely spaced in time. Unlike the case of 1980, when the moment release was dominated by the first mainshock subevent, the two mainshocks of the Colfiorito sequence were comparable in size and are presumed to have ruptured in opposite directions, generating a pattern of cumulative damage that does not fully reflect the actual energy release. We believe that, had the two shocks occurred separated in time by a few years, our approach would have retrieved the correct extent of each individual source.
A general conclusion from a simple visual inspection of Figure 5 is that the main earthquake sources of this region align along the crest of the Apennines within a <50 km-wide corridor, suggesting the existence of a relatively simple yet extremely continuous seismogenic belt. This intriguing circumstance was first pointed out exactly 150 years ago by Perrey  based on a qualitative examination of intensity data, and has later become the basis for the development of modern earthquake recurrence models for the region (e.g., Valensise et al. ). Notable exceptions are represented by the 1627, 1646, 1731, 1881, 1943, 1948 earthquakes, which, following Frepoli and Amato [1997a, 1997b], could be interpreted as the manifestation of an active compressive belt rather well separated from the main active extensional belt straddling the crest of the Apennines.
A subsequent observation is that there appears to be limited overlap between adjacent sources. This condition supports the earlier assumption that our dataset of 42 large historical earthquakes is representative of as many individual sources belonging to a segmented belt. In conjunction with additional tectonic and instrumental evidence, this circumstance may form the basis for a systematic search of potential gaps in historical seismic release throughout the investigated region.
The combination of a mildly heterogeneous tectonic regime, some scatter in the input data and some instability in the processing algorithm could indeed be reflected in a tendency for the investigated sources to exhibit a rather scattered orientation. Quite surprisingly, no such tendency appears from the results shown in Figure 5 ; on the contrary, most of the sources seem to align in a rather orderly fashion along the trend of the Apennines. In particular, while the main sources of the southern Apennines all trend between N40°W and N60°W, the sources inferred for the two largest shocks of the 1703 sequence (1703a and 1703b in Figure 5 ) seem to testify to a known transition from the N70°W-trending northern Abruzzi tectonic structures to the decidedly more north-south trend of the Umbria-Marche Apennines (see for example Cello et al. ). The only significant departures from the general trend concern smaller-size, less constrained earthquakes such as those of 1639, 1646, 1853, 1873, 1881, 1904, 1933 and 1948. Following the interpretation proposed by Valensise et al. , at least some of these earthquakes (particularly those closer to the axis of the Apennines) might reflect the activity of known transverse tectonic lineaments (that is, perpendicular to the main trend of the Apennines) which are known to predate the onset of the present stress regime.
Overall, our modeling results are compatible with the general notion that the central and southern portions of peninsular Italy are actively extending in a direction perpendicular to the local strike of the Apennines. This circumstance has been qualitatively known for some time based on conventional geological evidence (e.g., Scandone ), but it has been recently demonstrated to hold also for present-day tectonics by Valensise et al.  and Amato and Montone , respectively based on the analysis of the largest historical earthquakes comprising the central and southern Apennines segmented seismogenic belt and on a careful examination of direct indicators of the modern stress field (earthquake focal mechanisms and borehole breakout data). The uniformity of the trend delineated by our intensity-based sources and its consistency with the information supplied by several independent lines of evidence represent an implicit validation of the approach itself.
We tested our approach using Mercalli intensity felt reports taken from the "Catalogue of Strong Italian Earthquakes from 461 B.C. to 1990" electronic database [Boschi et al., 1997] for 41 events that occurred in central and southern Italy between the year 1600 and 1984 and for the 26 September 1997, Colfiorito (central Italy) events.
The approach turns the intensities reported for a given earthquake into a three-dimensional extended source of specified size, location and orientation, ready to be used for deterministic modeling of ground motion or for any other calculation of seismic hazard. The procedure is quite stable for different choices of the algorithm parameters and, in most cases, provides a result which is statistically significant against the hypothesis of uniform distribution of the data. The computer code used for the calculations was written in Fortran and will be available upon request.
The resulting pattern of source orientations for the central and southern Apennines is rather homogeneous and fully compatible with the prevailing style of local tectonic deformation of the area, which shows extension consistently trending perpendicular to the axis of this section of the Italian peninsula. Our results agree fairly well with the instrumental or surface faulting evidence available for a limited number of large earthquakes, with typical inaccuracies of the order of 5-10 km for the source location, 0.2-0.3 moment magnitude units for the source size, and 10°-15° for the source orientation.
A significant limitation of the approach is represented by its inability to filter out possible distortions of the macroseismic field associated with source directivity or extreme source complexity. This characteristic, however, could eventually be turned to our advantage for exploring the dynamic properties of the source if independent information on the rupture timing and propagation direction becomes available. Nevertheless, the examination of the information available for modern earthquakes shows that in several cases the inaccuracy of our intensity-based estimates is comparable with the uncertainties in instrumental determinations.
In spite of this limitation, the overall arrangement of the inferred sources suggests only minimal overlap between adjacent sources and highlights "missing" source areas that may be incorporated in current fault segmentation models and should become the locus of more focused investigations in the future.
Many of the concepts in this paper benefited from discussions with D. Albarello, G. Ferrari and W. Marzocchi. We acknowledge thoughtful and encouraging contributions by A. Johnston and by an anonymous reviewer. This work was partially supported by the Italian Ministry for the University, Scientific Research and Technology (MURST).
The algorithm operates through five steps:
1) The data are subdivided into classes of intensity; data in between classes are assigned to the lower intensity class (e.g., an intensity VII-VIII is assigned to class VII);
2) Localities that reported the maximum observed intensity (Imax) are set aside for subsequent calculations;
3) If the maximum observed intensity falls in between full values (e.g., IX-X), the localities that experienced the lower full value are selected in addition to all those where Imax was reported (e.g., all intensity IX-X plus all intensity IX data);
4) If the number of selected localities is less than 3 the dataset is extended to include data from the lower intensity classes up to Imax minus one degree;
5) The epicentral coordinates are calculated as the 25% trimmed average, that is, the average of all values included in the interval between the first and third quartiles, of the coordinates of all selected localities.
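The five steps above can be sketched as follows. This is an illustrative Python rendition, not the authors' original Fortran code; the function and variable names are ours. Note that rounding half-degree intensities down (step 1) also takes care of step 3, since an intensity IX-X and an intensity IX then fall in the same class.

```python
import numpy as np

def trimmed_mean(values):
    """Average of the values between the first and third quartiles (step 5)."""
    lo, hi = np.percentile(values, [25, 75])
    inside = values[(values >= lo) & (values <= hi)]
    return inside.mean()

def macroseismic_epicenter(lats, lons, intensities):
    """Estimate epicentral coordinates from intensity observations (steps 1-5)."""
    # Step 1 (and implicitly step 3): assign half-degree data to the lower class.
    classes = np.floor(np.asarray(intensities, dtype=float))
    lats, lons = np.asarray(lats, dtype=float), np.asarray(lons, dtype=float)
    imax = classes.max()
    selected = classes >= imax          # step 2: localities reporting Imax
    if selected.sum() < 3:              # step 4: extend down to Imax minus one degree
        selected = classes >= imax - 1
    return trimmed_mean(lats[selected]), trimmed_mean(lons[selected])
```

With three selected localities, the quartile trimming keeps only the median point, which illustrates how the estimator damps outlying coordinates.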
All arbitrary parameters of the algorithm, such as the minimum number of data to be included, the minimum intensity threshold and the central tendency estimator, were selected by Gasperini and Ferrari [1995, 1997] based on a stability analysis.
Unfortunately, the uncertainty associated with the algorithm can be estimated neither a priori nor a posteriori, but only assessed in terms of the internal consistency of the full procedure. For this purpose, the sum of squared residuals between the coordinates of the selected localities and the coordinates of the inferred epicenter is calculated for every earthquake. This parameter cannot be used directly as an estimate of the uncertainty in the location because it also reflects the size of the mesoseismal area; it may, however, be used as a parameter controlling the reliability of the estimate: for example, a large value may implicitly indicate the existence of highly anomalous intensity points or the incompleteness of the data distribution (e.g., when the epicenter falls offshore or in a sparsely populated area).
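The consistency parameter just described amounts to a simple sum of squared coordinate residuals; a minimal sketch (our naming, not the original code) could be:

```python
import numpy as np

def location_consistency(lats, lons, epi_lat, epi_lon):
    """Sum of squared coordinate residuals of the selected localities with
    respect to the inferred epicenter; a large value may flag anomalous
    intensity points or an incomplete data distribution."""
    lats = np.asarray(lats, dtype=float)
    lons = np.asarray(lons, dtype=float)
    return float(((lats - epi_lat) ** 2 + (lons - epi_lon) ** 2).sum())
```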
Epicentral Intensity method. Due to the limited number of intensity data normally available for smaller or older earthquakes, until recently the most commonly used procedure involved fitting the magnitude estimates of recent earthquakes to their epicentral intensity by least-squares regression, and then using the regression parameters to infer the magnitude of pre-instrumental events. The empirical relationships obtained from this type of analysis are normally characterized by a very large dispersion, due both to the inherent uncertainty of the intensity parameter and to the uncertainty in the hypocentral depth. Since for an earthquake of a given size the deeper the source, the smaller the effects at the Earth's surface, the magnitude of slightly deeper earthquakes is systematically underestimated by these methods, while that of shallower earthquakes is overestimated.
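The calibrate-then-apply scheme described above can be sketched in a few lines of Python. The calibration values below are purely illustrative (they are not taken from any catalogue), and the linear M-I0 form is only one of the functional shapes used in practice:

```python
import numpy as np

# Calibration set: epicentral intensities and instrumental magnitudes
# of recent events (illustrative values, not from the Italian catalogue).
i0_cal = np.array([7.0, 8.0, 9.0, 10.0, 11.0])
m_cal  = np.array([5.1, 5.6, 6.2, 6.7, 7.3])

# Least-squares fit of M = a + b * I0 on the instrumental events...
b, a = np.polyfit(i0_cal, m_cal, 1)

# ...then applied to a pre-instrumental event for which only I0 is known.
m_hist = a + b * 9.5
```

The single-value regression carries no depth information, which is exactly the weakness noted in the text: two earthquakes with the same magnitude but different depths yield different I0 and hence different inferred magnitudes.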
Isoseismals and Felt Area method. The area or average radius of individual isoseismals, or of the Felt Area, is indeed a much better estimator of the earthquake magnitude than a single intensity value [Toppozada, 1975]. Using this type of data, which reflect a combination of many spatially distributed observations, the irregularities of the intensity field due to source directivity, to the anisotropy of the seismic attenuation, or to site effects, are averaged out and hence strongly reduced. Even with this method, however, the source depth may play a significant role because it controls rather strongly the decay of intensity with distance.
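In the same illustrative spirit, regressing magnitude on the logarithm of the felt area replaces a single intensity reading with an integral measure of the whole field; again, all numerical values below are made up for the sketch:

```python
import numpy as np

# Felt areas (km^2) and magnitudes for calibration events (illustrative values).
area_cal = np.array([2.0e3, 8.0e3, 3.0e4, 1.2e5])
m_cal    = np.array([5.0, 5.6, 6.2, 6.8])

# Regressing magnitude on log10(felt area) averages out the azimuthal
# irregularities (directivity, attenuation anisotropy, site effects)
# that bias a single intensity value.
b, a = np.polyfit(np.log10(area_cal), m_cal, 1)
m_est = a + b * np.log10(5.0e4)
```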
Mixed method. This approach was first proposed by Galanopoulos, based on the observation that the two previous criteria are affected in opposite directions by the unknown depth of the source. Sibol et al. applied a variation of this method to compute the magnitude of historical earthquakes in the United States. The mixed method is certainly preferable when spatially distributed intensity data are available. The data supplied by the CFTI [Boschi et al., 1995, 1997] offer an opportunity to test this technique on most of the strong Italian earthquakes of the past four centuries. The algorithm, originally developed by Gasperini and Ferrari [1995, 1997], attempts to compute an "equivalent magnitude", that is, an estimate compatible with those that can be derived rigorously only through the usual instrumental procedures. We used the functional relationship derived by Sibol et al., modified to use not only FA data but also the areas of the individual available isoseismals.
The algorithm operates through six steps:
1) the macroseismic epicenter is computed for each earthquake (see Appendix 1 );
2) the epicentral intensity I0 is assumed to be equal to the observed Imax if at least two data with that intensity value are present; otherwise I0 is set to the second highest observed intensity value (with a lower bound represented by Imax minus one degree);
3) for each earthquake the distance between the macroseismic epicenter and each of the sites for which intensity is available is computed. This dataset is then used to derive the average distance RI for each intensity value (using the median as central tendency estimator);
4) using the instrumental magnitudes of Italian earthquakes revised by Margottini et al., a bivariate weighted regression law of the type proposed by Sibol et al. is fitted over the dataset of instrumental events for each intensity level;
5) through the regression parameters computed in the previous step (one regression for each isoseismal intensity), the magnitude and the corresponding uncertainty are computed for all the other earthquakes in the database, for each intensity level for which the average distance can be evaluated;
6) the equivalent moment magnitude M is computed for each earthquake as the weighted average of the values estimated using the RI obtained for the different intensity levels. The weight of each estimate is taken as the inverse of its squared error, and the uncertainty of the resulting magnitude is given by the inverse of the square root of the sum of the weights. The actual values of the regression coefficients and the goodness-of-fit parameters for each regression are given in Gasperini and Ferrari. In most cases the coefficient of determination R2 is larger than 80%.
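The combination rule of step 6 is a standard inverse-variance weighted average, and can be written compactly (the function name is ours):

```python
import numpy as np

def equivalent_magnitude(estimates, sigmas):
    """Inverse-variance weighted average of the per-intensity-level magnitude
    estimates (step 6). Each estimate is weighted by 1/sigma^2; the uncertainty
    of the result is 1 / sqrt(sum of weights)."""
    m = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return (w * m).sum() / w.sum(), 1.0 / np.sqrt(w.sum())
```

For instance, combining estimates of 6.0 +/- 0.2 and 6.4 +/- 0.4 yields a value much closer to the better-constrained estimate, with an uncertainty smaller than either input.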
Finally, a regression analysis is performed to evaluate the correspondence between macroseismic and instrumental magnitudes, using the dataset of 75 earthquakes that were used for tuning the initial empirical regression. The coefficient of determination R2 is 74% and the mean square residual equals 0.28 magnitude units. The same analysis performed over a reduced dataset of 14 earthquakes for which a reliable direct measure of the seismic moment is available yields R2 = 91% and a mean square residual of 0.17 magnitude units.