Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Comparison of generalized inversion and Pujol’s methods in determination of local magnitude scale (ML) for Central Alborz
1
11
FA
Reza
Emami
M.Sc. in Geophysics, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Reza
Rezaei
M.Sc. in Geophysics, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Mehdi
Rezapour
Associate Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
10.22059/jesphys.2013.35178
The idea of measuring the size of an earthquake by means of an instrumental estimate of the energy released at the focus led Richter (1935) to create the first magnitude scale. The concept of magnitude is based on the fact that the amplitudes of seismic waves depend on the energy released at the focus, once they have been corrected for attenuation during propagation. The distance-correction function is anchored by the assumption
<sup>*</sup>Corresponding author: Tel: 09141761984, Fax: 021-88630479, E-mail: rezaemami@alumni.ut.ac.ir
that when a maximum amplitude of 1 mm is observed at a distance of 100 km, M<sub>L</sub> = 3.0. There are several approaches to inverting empirical distance-correction functions for local magnitude scales (e.g., Kanamori and Jennings, 1978; Hutton and Boore, 1987; Anderson, 1991). In this study we use the approach suggested by Hutton and Boore (1987), in which the empirical attenuation curve is expressed as an explicit distance-correction function whose parameters n and k describe geometrical spreading and anelastic attenuation, respectively. After rearranging, the distance-correction function can be cast into a standard matrix form representing a typical linear inverse problem in geophysics, which can be solved using least-squares, maximum-likelihood, or generalized inversion methods (e.g., Aki and Richards, 1980; Menke, 1984; Lay and Wallace, 1995; Aster et al., 2005). In this study, we used both the generalized inversion and Pujol's methods to determine the empirical attenuation curve in the Central Alborz region.
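As a sketch (not the paper's exact calibration), a Hutton-Boore style local magnitude relation can be written as a small helper. The function name, the default n and k (the Pujol-method values reported in this abstract), and the sign convention for the station term S are illustrative assumptions:

```python
import math

def ml_hutton_boore(amp_mm, r_km, n=0.9073, k=0.0035, station_corr=0.0):
    """Local magnitude from a synthetic Wood-Anderson amplitude (mm) using a
    Hutton-Boore style distance correction:
        ML = log10(A) + n*log10(r/100) + k*(r - 100) + 3.0 + S
    Anchored so that A = 1 mm at r = 100 km gives ML = 3.0 (Richter's datum).
    The defaults n, k and the sign of S are illustrative assumptions."""
    return (math.log10(amp_mm)
            + n * math.log10(r_km / 100.0)
            + k * (r_km - 100.0)
            + 3.0
            + station_corr)
```

By construction, a 1 mm amplitude at 100 km returns exactly M<sub>L</sub> = 3.0, and larger amplitudes or distances increase the magnitude.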
Fitting amplitude variation against distance is the basic approach of recent inversion methods: the magnitudes of a number of earthquakes, site-specific correction terms for each recording station, and the two attenuation constants can all be obtained simultaneously in a single step; as a result, there is a trade-off between magnitudes and station corrections. It is also possible to determine these parameters in two steps without that trade-off, using the separation-of-parameters technique introduced by Pavlis and Booker (1983) as modified by Pujol (1988, 2000). We therefore used two methods, generalized inversion and Pujol's method (Pujol, 2003), to calculate the empirical attenuation relation and local magnitude (M<sub>L</sub>) in the Central Alborz, northern Iran.
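The one-step least-squares idea can be illustrated with a toy inversion for n and k alone (synthetic data; the magnitude and station terms are omitted for brevity, so this shows only the matrix formulation, not the paper's full system):

```python
import numpy as np

# Toy one-step inversion: recover n (geometrical spreading) and k (anelastic
# attenuation) from synthetic amplitude-distance data by least squares.
rng = np.random.default_rng(0)
r = rng.uniform(20.0, 300.0, 200)                # hypocentral distances (km)
n_true, k_true = 1.0, 0.003
d = (n_true * np.log10(r / 100.0)                # "observed" distance-correction term
     + k_true * (r - 100.0)
     + 0.01 * rng.standard_normal(r.size))       # measurement noise
G = np.column_stack([np.log10(r / 100.0), r - 100.0])   # design matrix
m, *_ = np.linalg.lstsq(G, d, rcond=None)        # generalized (least-squares) inverse
n_est, k_est = m
```

With 200 noisy observations the two parameters are recovered closely, illustrating why the problem is well posed once cast in matrix form.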
We used a large dataset of 3886 events comprising 62523 waveforms recorded by the Tehran, Semnan and Sari seismic networks from 02/03/1997 to 13/03/2011. These networks comprise 19 three-component stations. We computed synthetic Wood-Anderson (W-A) seismograms by removing the instrument response of each record and convolving the resulting signal with the response of the standard W-A torsion seismograph. We assumed a static magnification of 2080 for the W-A instrument (as shown by Uhrhammer and Collins (1990), the W-A instrument has a magnification of 2080, not 2800 as often assumed). Following Richter's method, we used amplitudes that are arithmetic means of those of the horizontal components; the maximum zero-to-peak amplitude was therefore measured on both horizontal synthetic seismograms. For a given event, M<sub>L</sub> is calculated independently at each recording station, and the station values are averaged to give the event magnitude. Magnitude residuals obtained from the attenuation curves of this study were then plotted as a function of hypocentral distance, and the results obtained from the two methods are very similar.
The corresponding values of the geometrical spreading parameter (n) and anelastic attenuation parameter (k) are 0.9819 and 0.0028 from the generalized inversion method, and 0.9073 and 0.0035 from Pujol's method. The two methods yielded similar results, but for the reasons mentioned above, the result obtained by Pujol's method was chosen as the final one. Station corrections are related to local ground conditions and instrument installation (Richter, 1958). Before station corrections are applied, a station with a positive correction yields a smaller ground-motion value than a station with a negative correction for the same event; in other words, under identical installation conditions, a station with a negative correction amplifies seismic waves relative to a station with a positive correction. The station corrections obtained in this study vary between -0.378 and 0.725, suggesting that local site effects may have a strong influence on the amplitudes.
Generalized inversion method,Pujol’s method,Local magnitude (ML),Empirical attenuation curve,Central Alborz
https://jesphys.ut.ac.ir/article_35178.html
https://jesphys.ut.ac.ir/article_35178_7c1216a2220441a3cd55472f7ca26408.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Determining the slip rate on the Gowk fault using POST-IR method
13
28
FA
Morteza
Fattahi
Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
mfattahi2@ut.ac.ir
Nasrin
Karimi Moayed
M.Sc. Student in Geophysics, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Richard
Walker
Assistant Professor, Department of Earth Sciences, University of Oxford, UK
walker@mailanator.com
Morteza
Talebian
Assistant Professor, Research Institute of Earth Science, Geological Survey of Iran
talebian@mailanator.com
10.22059/jesphys.2013.35179
<sup>*</sup>Corresponding author: Tel: 021-61118253, Fax: 021-88630479, E-mail: mfattahi@ut.ac.ir
Iran is one of the most tectonically active regions of the Alpine-Himalayan earthquake belt (Figure 1). The Gowk fault, located in Kerman Province, eastern Iran, is part of the Sabzevaran-Gowk-Nayband system of strike-slip faults that accommodates north-south right-lateral shear along the western margin of the Dasht-e-Lut. Its length is more than 150 km. Collectively, the northern part of the fault was ruptured by five destructive earthquakes between 1981 and 1998, whereas no such activity has been observed on the southern segment. The southern segment, which is our study site, therefore remains a potential source of future earthquakes.
The slip rate of a fault is one of the most important parameters for assessing its seismic hazard. To determine the slip rate, two quantities are needed: the displacement and the time over which it accumulated. In the southern Golbaf basin the fault is composed of three main strike-slip segments arranged in a right-stepping pattern (Figure 3). Field investigations revealed around 30 m of right-lateral displacement on the fault. We used two main dating approaches, radiocarbon and optically stimulated luminescence (OSL). Two <sup>14</sup>C samples and three OSL samples were collected at a 3 m-high exposure of the lakebed on the eastern side of the fault at 29:47:30 N, 57:46:28 E (Figure 4, Figure 5). The first quantitative estimate of the Holocene slip rate on the Gowk fault was obtained from <sup>14</sup>C dating of the two wood fragments collected together with the OSL samples (Walker et al., 2010).
In this study, we have tried to determine the slip rate of the Gowk fault by means of luminescence dating, a chronological method used extensively in the earth sciences. In this method, the event being dated is the last exposure of the sample to daylight; hence the determined age is the time of deposition of the sediment that buried the older material. Because the fault has displaced the rivers at Golbaf Lake, and the rivers have cut the existing lakebed sediments, the last activity of the Gowk fault must postdate the last sedimentation in the lake. If we date the last sedimentation of Golbaf Lake, we can calculate the fault slip rate from that age and the measured displacement.
Luminescence dating is based on the emission of light (the natural luminescence signal) by commonly occurring minerals, principally quartz and feldspar. These minerals act as natural dosimeters, recording the amount of radiation to which they have been exposed from the decay of radioactive isotopes such as uranium (U), thorium (Th) and potassium (K).
To date a sample using one of the luminescence dating methods, two parameters are needed: the equivalent dose and the dose rate. The single aliquot regenerative-dose (SAR) protocol was used to determine the equivalent dose (De). Ideally, after chemical preparation, the sample consists of quartz grains only; however, this is not always the case, and we sometimes face feldspar contamination, meaning that not all feldspar grains have been removed. Such contamination leads to age underestimation, because feldspar suffers from anomalous fading: the observed luminescence signal decreases as the sample is stored in nature or in the laboratory. To check the purity of the quartz in an aliquot, we usually add a simple post-IR measurement at the end of the SAR experiments. Contamination is a problem if the infrared signal (emitted by feldspar) exceeds 10% of the blue signal (emitted by quartz); in that case we reject the result of that aliquot. However, if the majority of aliquots show this problem, no reliable De can be calculated. For quartz samples that still show this problem after sufficient HF etching, the alternative is to use the POST-IR method. As the Golbaf samples suffered from this problem, we applied the POST-IR method to find De for these samples. In addition, factors such as the ability of SAR to correct for sensitivity change and to recover a given laboratory dose were checked to ensure that the ages obtained with the SAR protocol are trustworthy. Equivalent doses were calculated by analyzing the data with the Analyst software; the results are shown in Table 1. Using the equivalent dose obtained from the histogram method and the following formula, the ages of the collected samples were determined:
Age (ka) = equivalent dose (Gy) / dose rate (Gy/ka) (1)
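Equation (1) in code, with hypothetical illustrative numbers (not the GB1-GB3 values):

```python
def luminescence_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Equation (1): Age (ka) = equivalent dose (Gy) / dose rate (Gy/ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Hypothetical example: a De of 10 Gy at a dose rate of 2.5 Gy/ka gives 4 ka.
age = luminescence_age_ka(10.0, 2.5)
```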
The dose rates and ages obtained for the three samples (GB1, GB2 and GB3) are shown in Table 2.
By considering the ages calculated for the three samples and their depths, and extrapolating to the surface, we find a surface age of 2800-5400 yr (Diagram 1). Assuming the time of faulting to be close to the age of the lake surface, the slip rate of the Gowk fault is 5.5-10.7 mm/yr. It should be noted that this age (2800-5400 yr) is younger than that predicted from <sup>14</sup>C, possibly an effect of fading; as a result, our slip rate is higher than that estimated by Walker et al. (2010). We suggest dating these samples using potassium feldspar grains to enable a comparison between the dating results.
Because of the complexity of the fault zone at depth, estimating the average return period for the Gowk fault is difficult. However, assuming a 3-meter slip in every earthquake, following the 1998 Fandogha earthquake, and using the calculated slip rate, the maximum return period is 280-540 years. Given this short return period and the fact that the southern part of the fault has not recently generated a destructive earthquake, it remains a potential source of a destructive earthquake in the region.
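The slip-rate and return-period arithmetic above can be checked with a few lines (offset, age range and the 3 m per-event slip are the values quoted in this abstract):

```python
def slip_rate_mm_per_yr(offset_m, age_yr):
    """Slip rate v = d / t for a measured offset and surface age."""
    return offset_m * 1000.0 / age_yr

def return_period_yr(slip_per_event_m, rate_mm_per_yr):
    """Average recurrence T = characteristic slip / slip rate."""
    return slip_per_event_m * 1000.0 / rate_mm_per_yr

# ~30 m offset over the 2800-5400 yr surface-age range (this study):
v_fast = slip_rate_mm_per_yr(30.0, 2800.0)   # ~10.7 mm/yr
v_slow = slip_rate_mm_per_yr(30.0, 5400.0)   # ~5.6 mm/yr
# 3 m slip per event (1998 Fandogha assumption) -> ~280-540 yr recurrence:
t_min = return_period_yr(3.0, v_fast)
t_max = return_period_yr(3.0, v_slow)
```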
Gowk fault,Slip rate,Luminescence
https://jesphys.ut.ac.ir/article_35179.html
https://jesphys.ut.ac.ir/article_35179_4ef1a243c4c4c300938e1c5937e8c421.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Relative travel time estimation for different seismic phases using total variation (TV) regularization method
29
38
FA
Fatemeh
Roostaee
M.Sc. Student of Geophysics, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Ali
Gholami
Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
91577787
Ahmad
SadidKhouy
Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
10.22059/jesphys.2013.35182
<sup>*</sup>Corresponding author: Tel: 021-61118292, Fax: 021-88630479, E-mail: agholami@ut.ac.ir
Ill-posed inverse problems play an important role in various fields of geophysical study. Basic information about the geophysical model (the unknown parameters) is needed to find a unique and stable solution to such problems. Recent progress in computational methods and advances in the analysis of real-world signals provide suitable tools for extracting more detailed information about geophysical models from noisy, uncertain observations (recorded data). In this paper, we study relative travel time estimation of individual seismic arrivals using total variation (TV) regularization, a method that has recently attracted much attention from scientists for reconstructing models with sharp boundaries.
Seismic waves convert to different phases when passing through boundaries between earth layers with different geological properties and material structure. For example, seismic travel times can be used to determine the velocity field of the study area by tomographic inversion. Furthermore, boundary layer structure at the core-mantle boundary (CMB) can be investigated using SKS and SPdKS timing differences recorded by broadband seismometer arrays. Accurate measurement of the travel times of seismic phases, or of their differences, is therefore very important. However, robust measurement of travel time is often difficult, specifically when the data are contaminated by noise and lack clearly defined onsets. The travel time of a particular phase can be determined by several methods, including the cross-correlation technique and hand picking. The former cross-correlates the signal of interest with a reference phase; choosing the reference phase is a major challenge, since the accuracy of the process depends significantly on its similarity to the phase being studied. Hand picking is also challenging, because background noise often obscures confident identification of signal onset. Furthermore, seismic phases are often altered by scattering, attenuation, multipathing, or anisotropy, making accurate measurement of their travel times even more difficult. Consequently, preprocessing of the signal is required to improve the signal-to-noise ratio and sharpen signal onsets so that the arrival-time determination is more robust.
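The cross-correlation picking described above can be sketched as follows; the impulse-like test signals and the sampling interval in the usage example are illustrative:

```python
import numpy as np

def relative_time_xcorr(sig, ref, dt):
    """Relative travel time of `sig` with respect to a reference phase `ref`,
    taken as the lag that maximizes their cross-correlation."""
    c = np.correlate(sig, ref, mode="full")
    lag_samples = int(np.argmax(c)) - (ref.size - 1)
    return lag_samples * dt
```

For a signal that is simply a delayed copy of the reference, the returned lag equals the delay; for real, noisy phases, the accuracy depends on how similar the two waveforms are, which is exactly the limitation noted above.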
In this paper, we address the problem of accurate determination of seismic arrival times or relative times of different phases. We formulate the effects of source, attenuation, and receiver structure by convolution with a skewed Gaussian and try to remove those effects from the seismogram via deconvolution. Deconvolution is a longstanding problem in many areas of signal and image processing with applications in astronomy, remote-sensing imagery, medical imaging, and other fields working with imaging devices.
Two possible deconvolution scenarios are nonblind, where the Gaussian function modeling the degradation is assumed to be known a priori, and blind, where it is not. Even when the degradation is known perfectly, restoration of the underlying signal from the observed seismogram is an ill-posed problem requiring an appropriate prior to make the solution unique and stable. Numerous algorithms have been developed to address the problem, including least-squares-type Wiener filters and more sophisticated regularization methods such as total variation (TV) regularization. Although these methods can provide satisfactory results, they are generally nonblind and therefore require good knowledge of the Gaussian blurring function to work properly. In reality, however, the degradation function is not known with good accuracy; it must also be estimated during deconvolution, making the problem even more ill-posed.
The blind deconvolution used in this paper follows a sequential approach in which the Gaussian function is first estimated from the data via an L-curve analysis. The estimate from this first step is then used in combination with TV regularization, which yields a piecewise-constant reconstruction and preserves the edges of the signal; this is important for defining onsets and thus for accurate measurement of the travel times of seismic phases. TV helps stabilize the deconvolution while preserving the discontinuities of the solution. It improves the signal-to-noise ratio, sharpens seismic arrival onsets, and acts as an empirical source deconvolution, thus enabling more reliable relative travel time estimation of phase onsets.
Instead of the conventional l1-norm used in the TV functional, we use a more strongly sparsifying potential function to sharpen the phase onsets. Owing to its simplicity and good convergence, an iteratively reweighted least squares (IRLS) method is used to optimize the resulting objective function.
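A minimal IRLS sketch for TV-regularized deconvolution is given below. Note it uses the conventional l1 TV penalty rather than the paper's more strongly sparsifying potential (the reweighting idea is the same), and the dense convolution matrix is for toy problem sizes only:

```python
import numpy as np

def tv_deconv_irls(y, h, lam=0.1, n_iter=30, eps=1e-6):
    """1-D TV-regularized deconvolution by IRLS (illustrative sketch).
    Minimizes ||H x - y||^2 + lam * sum_i |(D x)_i| with the l1 TV term
    handled by iterative reweighting."""
    n = y.size
    # Dense causal convolution matrix for the blur kernel h (toy sizes only)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - h.size + 1), i + 1):
            H[i, j] = h[i - j]
    D = np.diff(np.eye(n), axis=0)              # first-difference (TV) operator
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)   # IRLS weights ~ 1/|Dx|
        A = H.T @ H + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, H.T @ y)
    return x
```

On a clean step function with a trivial (delta) kernel, the iteration keeps the plateaus flat and the jump sharp, which is the edge-preserving behavior exploited for onset picking.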
Two different examples are used to investigate the performance of the proposed algorithm: (1) signal restoration of SKS and S (or Sdiff) in synthetic seismograms, and (2) restoration of real data from 30 seismic recordings of a deep-focus South American earthquake. The results confirm the high performance of the proposed method in calculating the time differences of these phases.
Total variation (TV),Deconvolution,IRLS algorithm,Core-mantle boundary CMB,SKS,SPdKS
https://jesphys.ut.ac.ir/article_35182.html
https://jesphys.ut.ac.ir/article_35182_ac6fa9e78d48502e0c51e85acadff8ed.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Single-step estimation of porosity using stochastic inversion algorithm in a south-western oil field of Iran
39
49
FA
Mostafa
Abbasi
M.Sc. Student of Geophysics, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Mohammad Ali
Riahi
Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
52515757
10.22059/jesphys.2013.35183
In exploration seismology, estimation of the elastic parameters of rocks from seismic amplitudes is treated as an inverse problem. Nowadays, model-based and, more generally, optimization-based algorithms are among the most common methods of seismic inversion. These algorithms, known as deterministic methods, suffer from two main problems.
First, the vertical resolution of the outputs is very low: the estimates contain the same frequency content as the seismic bandwidth, so their resolution is the same as that of the seismic data.
Second, these methods cannot provide any estimate of the uncertainty in the output model, because they generate only one realization of acoustic impedance, taken as the most probable model.
In addition, the output of deterministic methods is a very smooth estimate of acoustic impedance, which makes it inappropriate for evaluating event continuity and for volumetric calculations.
To overcome these shortcomings of deterministic methods, an algorithm known as stochastic inversion has been proposed, which in most implementations uses Sequential Gaussian Simulation (SGS) of acoustic impedance logs to generate multiple realizations of acoustic impedance, all of them compatible with the seismic data.
In this method, a random path is first selected through the seismic grid. At each grid node along the path, a pseudo-acoustic-impedance log is simulated by SGS. This pseudo-log is then convolved with the extracted wavelet to produce a synthetic seismogram, which is compared with the real one to measure the misfit. If the misfit is less than a threshold value, the simulated impedance log is accepted and added to the dataset used for simulation at subsequent nodes; otherwise, a new pseudo-impedance log is simulated until an acceptable one is created. This procedure is repeated for all grid nodes along the selected path until the seismic grid is filled with acoustic impedance logs.
The above procedure is then repeated using different random paths. As a result, multiple realizations of acoustic impedance are created, all of which are compatible with the original seismic data.
Since the output realizations of stochastic inversion are fundamentally simulated well-log models, their resolution is controlled by the well logs, which have much higher resolution than the seismic data. The ability of stochastic inversion to generate multiple realizations also makes it possible to evaluate uncertainties in the estimates.
In the current study, the stochastic inversion algorithm is modified to estimate porosity instead of acoustic impedance. Neutron porosity logs are used to generate local realizations of pseudo-porosity logs, which are then converted to acoustic impedance so that synthetic seismograms can be constructed. The synthetic seismograms are compared with the real ones, and an accept-reject criterion selects the best local realization at each grid node. This procedure is repeated for all grid nodes to produce a 3D estimated model (realization) of porosity.
Therefore, the workflow for the single-step inversion of porosity can be expressed as follows:
- A random path is selected through the seismic grid.
- At each node, a pseudo-porosity log is simulated using the original and previously inverted porosity logs.
- The simulated porosity log is converted into an impedance log by means of relations previously established between porosity and acoustic impedance at well locations.
- The converted impedance log is convolved with the extracted wavelet to produce a synthetic seismogram.
- The synthetic seismogram is compared with the original seismogram to measure the misfit.
- If the calculated misfit is less than a threshold value, the simulated porosity log will be selected and added to the data set to be used for simulation of future nodes. On the other hand, if the simulated log does not satisfy the maximum allowable misfit, then a new pseudo-porosity log will be simulated until an allowable porosity log is created.
- The above procedure is repeated for all grid nodes of the selected path until the seismic grid is filled with porosity logs.
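The accept-reject control flow above can be sketched as follows. The SGS step is replaced by a trivial stand-in and the porosity-impedance relation is a hypothetical linear fit, so this illustrates the workflow, not the actual geostatistics:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                            # samples per (pseudo-)log

def simulate_pseudo_log(accepted):
    """Stand-in for sequential Gaussian simulation (SGS): draws a
    pseudo-porosity log loosely conditioned on previously accepted logs."""
    base = np.mean(accepted, axis=0) if accepted else np.full(N, 0.15)
    return base + 0.02 * rng.standard_normal(N)

def porosity_to_impedance(phi):
    """Hypothetical linear porosity-impedance relation calibrated at wells."""
    return 9000.0 - 15000.0 * phi

def synthetic_trace(imp, wavelet):
    refl = np.diff(imp) / (imp[1:] + imp[:-1])    # reflection coefficients
    return np.convolve(refl, wavelet, mode="same")

def invert_node(real_trace, wavelet, accepted, tol, max_tries=200):
    """Accept-reject loop for one grid node: keep simulating pseudo-porosity
    logs until the synthetic-trace misfit drops below the threshold."""
    for _ in range(max_tries):
        phi = simulate_pseudo_log(accepted)
        syn = synthetic_trace(porosity_to_impedance(phi), wavelet)
        if np.sum((syn - real_trace) ** 2) < tol:
            return phi
    return None                                   # no acceptable realization found
```

Accepted logs would be appended to `accepted` and the loop repeated node by node along the random path, then over multiple random paths to build multiple realizations.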
The 3D porosity realizations created by this procedure show an acceptable match with the real porosity logs.
seismic inversion,Stochastic inversion,Geostatistical simulation,Acoustic impedance,Porosity
https://jesphys.ut.ac.ir/article_35183.html
https://jesphys.ut.ac.ir/article_35183_41fe94ce1a4ac4cc5a7e3adfd7b97798.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Time-variable gravity determination from the GRACE gravity solutions filtered by Tikhonov regularization in Sobolev subspace
51
77
FA
Abdolreza
Safari
Associate Professor, Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran
16752349
Mohammad Ali
Sharifi
0000-0003-0745-4147
Assistant Professor, Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran
sharifi@ut.ac.ir
Hamid Reza
Bagheri
M.Sc. Student of Geodesy, Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran
Yahya
Allahtavakoli
Ph.D. Student of Geodesy,Department of Surveying and Geomatics Engineering, College of Engineering, University of Tehran
10.22059/jesphys.2013.35185
The GRACE mission has provided the scientific community with time-variable gravity field solutions of high precision on a global scale. The mission, launched in March 2002, consists of two satellites that pursue each other in the same orbit. The distance between the two satellites is measured continuously to an accuracy of better than 1 micron using the K-band ranging (KBR) system on board. As the satellites fly through the gravity field, this distance changes, and by monitoring those changes the gravity field can be determined. To account for non-gravitational accelerations, each satellite carries an on-board accelerometer to measure them (Wahr and Schubert, 2007). Another goal of the mission is to provide atmospheric profiles from GPS measurements.
One of the products of this mission is the GRACE Level-2 data, consisting of monthly spherical harmonic coefficients up to degree 120. One application of these coefficients is to determine the time-variable gravity field, which in turn is used to solve for the time-variable mass field (Wahr and Schubert, 2007).
A mathematical model for determining surface density (mass) variations from spherical harmonic coefficients was presented by Wahr et al. (1998):

$$\Delta\sigma(\theta,\phi) = \frac{a\,\rho_{ave}}{3}\sum_{l=0}^{\infty}\sum_{m=0}^{l}\frac{2l+1}{1+k_l}\,\bar{P}_{lm}(\cos\theta)\left[\Delta C_{lm}\cos m\phi + \Delta S_{lm}\sin m\phi\right]$$

where $\Delta\sigma$, $\rho_{ave}$, $a$, $k_l$, $(\Delta C_{lm}, \Delta S_{lm})$, $\bar{P}_{lm}$, $l$ and $m$ are the surface density variations, mean Earth density, mean Earth radius, Love number of degree $l$, GRACE potential coefficient changes, fully normalized Legendre functions, and degree and order, respectively.
The spherical harmonic coefficients from GRACE are noisy, and the noise increases rapidly with the degree of the geopotential coefficients. In addition, monthly surface mass variation maps show long, linear features commonly referred to as stripes (Swenson and Wahr, 2006).
Hence, different filtering methods attempt to solve both problems. Filtering of the GRACE gravity solutions has been studied extensively; for some recent contributions see Wahr et al. (1998), Chen et al. (2005), Swenson and Wahr (2006), Kusche (2007), Sasgen et al. (2006), Swenson and Wahr (2011), and Save et al. (2012).
In this paper, to filter the GRACE gravity solutions, we propose a new way of deriving the surface mass change formula, under the assumptions of Wahr et al. (1998), by means of the singular value expansion of Newton's integral equation treated as an inverse problem.
Let $\delta V$ be the potential change caused only by the Earth's surface mass change; then

$$\delta V(P) = \int_{\sigma} K(P,Q)\,\Delta\sigma(Q)\,d\sigma_Q$$

or, in operator form,

$$\delta V = \mathcal{K}\,\Delta\sigma$$

where $\mathcal{K}$ is an integral operator with kernel $K$, whose series expansion is given in terms of associated Legendre functions. By means of the singular value expansion, the singular system of this operator is

$$\mathcal{K}\,v_i = \sigma_i\,u_i, \qquad i = 1, 2, \ldots$$

where $\sigma_i$, $v_i$ and $u_i$ are the singular values, right singular vectors and left singular vectors, respectively. In terms of the singular value expansion, the surface density variation can then be written as

$$\Delta\sigma = \sum_i f_i\,\frac{\langle \delta V, u_i\rangle}{\sigma_i}\,v_i$$

where the $f_i$ are filter coefficients determined by regularization methods. In this paper, the filter coefficients are determined by regularization methods such as truncated SVE, damped SVE, and the standard and generalized Tikhonov methods in a Sobolev subspace.
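In a discrete (SVD) analogue, the filter coefficients enter the solution as shown below; the function names and the tiny test matrix are illustrative, not the paper's operator:

```python
import numpy as np

def tikhonov_filter_factors(s, alpha):
    """Standard Tikhonov filter factors f_i = s_i^2 / (s_i^2 + alpha^2)."""
    return s**2 / (s**2 + alpha**2)

def tsvd_filter_factors(s, k):
    """Truncated SVE/SVD: f_i = 1 for the k largest singular values, else 0."""
    f = np.zeros_like(s)
    f[:k] = 1.0
    return f

def filtered_solution(A, b, f):
    """x = sum_i f_i * (u_i . b / s_i) * v_i, the discrete analogue of the
    filtered singular value expansion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (f * (U.T @ b) / s)
```

With all filter factors equal to one, the unregularized (naive) solution is recovered; damping the factors attached to small singular values is what suppresses the noise and stripes.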
The numerical results show the good performance of the generalized Tikhonov method in a Sobolev subspace, which effectively reduces both the noise and the stripes.
Surface Mass Variations Model,Noise,Singular Value Expansion,Regularization,Generalized Tikhonov,Sobolev Subspace
https://jesphys.ut.ac.ir/article_35185.html
https://jesphys.ut.ac.ir/article_35185_8b9949e1c4306b15aac89bbfcd30e572.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Depth and body type estimation of the magnetic anomalies using analytic signal and Euler deconvolution
79
94
FA
Kamal
Alamdar
Ph.D. Student in Mineral Exploration, Mining, Petroleum and Geophysics Department, Shahrood University of Technology, Iran
kamal.alamdar2@gmail.com
Abolghasem
Kamkar-Rouhani
Associate Professor, Mining, Petroleum and Geophysics Department, Shahrood University of Technology, Iran
kamkarrouhani@yahoo.com
Abdolhamid
Ansari
Associate Professor, Department of Mining and Metallurgical Engineering, Yazd University, Iran
a.ansari@mailanator.com
10.22059/jesphys.2013.35187
A variety of semiautomatic methods, based on derivatives of magnetic anomalies, have been developed to determine causative source parameters such as boundary locations and depths. One of these techniques is the analytic signal method for magnetic anomalies, which was initially used in its complex-function form and makes use of the properties of the Hilbert transform. Initially, it was successfully applied to profile data to locate dike-like bodies. The method was further developed by Roest et al. (1992) for the interpretation of aeromagnetic maps. Moreover, Bastani and Pedersen (2001) employed the method to estimate many parameters of dike-like bodies, including depth, strike, dip, width, and magnetization. Salem et al. (2002) also demonstrated the feasibility of the method for locating compact magnetic objects often encountered in environmental applications. The success of the analytic signal method stems from the fact that source locations of magnetic anomalies are obtained using only a few assumptions. For example, horizontal positions are estimated from the maxima of the amplitude of the analytic signal (<em>AAS</em>), and depths can be obtained from the shape of the <em>AAS</em> or from the ratio of the <em>AAS</em> to its higher derivatives. However, a correct depth estimate is obtained only when the source corresponds to the chosen model. Several attempts have been made to enable the analytic signal method to estimate both the depth and the model type of magnetic sources. Furthermore, a number of automated methods for source location from 2D (profile) magnetic data have been developed, based on the local wavenumber or the analytic signal.
The main advantage of using derived quantities such as the local wavenumber (LW) and amplitude of the analytic signal (AS) is that they are generally independent of source magnetization and dip effects, therefore allowing positional parameters such as depth and horizontal location to be determined more directly than from the magnetic field.
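A minimal 2-D (profile) AAS computation might look like this; the FFT-based Hilbert transform and the symmetric test anomaly in the usage note are illustrative:

```python
import numpy as np

def hilbert_imag(y):
    """Imaginary part of the analytic signal of y (FFT-based Hilbert)."""
    n = y.size
    Y = np.fft.fft(y)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(Y * h))

def analytic_signal_amplitude(T, dx):
    """|AS(x)| = sqrt((dT/dx)^2 + (dT/dz)^2) for a 2-D magnetic profile;
    the vertical derivative is the Hilbert transform of the horizontal one."""
    dTdx = np.gradient(T, dx)
    dTdz = hilbert_imag(dTdx)
    return np.hypot(dTdx, dTdz)
```

For a simple symmetric anomaly centered at x0, the AAS peaks over x0, which is how horizontal source positions are estimated; the AAS is independent of magnetization direction, as noted above.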
Special functions such as Euler deconvolution and the analytic signal play an important role in potential field data interpretation, particularly for magnetic data. In this paper, a new method is proposed based on the combination of these two functions that leads to automatic interpretation of 2D and 3D magnetic data; both the depth and the type of the subsurface body are estimated simultaneously. The final equation is produced by substituting the Euler deconvolution derivatives into the analytic signal equation. The proposed method has been applied successfully to synthetic and real magnetic data. It was also applied to high-resolution aeromagnetic data from the Yilgarn craton in Western Australia, where it enhanced the dykes, and to a ground magnetic profile in the Central Iran iron ore district at Bafgh, where the results were tested using inverse modeling.
Euler,Analytic signal,Potential field,Yilgarn plateau,Central Iran iron ore,inverse modeling
https://jesphys.ut.ac.ir/article_35187.html
https://jesphys.ut.ac.ir/article_35187_ecf0c4de58edc4278c4e6a8754297d54.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Evaluation of Havir Lake formation causes based on magnetotelluric data
95
110
FA
Behrooz
Oskooi
Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
boskooi2@ut.ac.ir
Safieh
Omidian
Ph.D. Student, Faculty of Geology, University of Sistan and Balouchestan, Iran
omidian@mailanator.com
10.22059/jesphys.2013.35190
Electromagnetic methods are widely used for the study of subsurface resistivity structures. These methods are based on the response of subsurface structures to electromagnetic fields. The magnetotelluric (MT) method is an electromagnetic technique that uses the natural, time-varying electric and magnetic field components measured at right angles at the surface of the earth to make inferences about the earth's electrical structure, which in turn can be related to the geology, tectonics and subsurface conditions. Measurements of the horizontal components of the natural electromagnetic field are used to construct the full complex impedance tensor, Z, as a function of frequency. Using the effective impedance, determinant apparent resistivities and phases are computed and used for the inversion. The apparent resistivity for the DET mode is likewise computed and used for the 2D inversion.
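The determinant (effective) impedance can be sketched as follows. This is a generic illustration, assuming SI units and the standard relations ρ_a = |Z_det|²/(ωμ0) and φ = arg(Z_det), checked against a uniform half-space, for which the apparent resistivity equals the true resistivity and the phase is 45°:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # magnetic permeability of free space (H/m)

def det_apparent_resistivity(Z, freq):
    """Determinant apparent resistivity (Ohm-m) and phase (deg) from a
    2x2 complex impedance tensor Z (SI units) at frequency freq (Hz)."""
    omega = 2.0 * np.pi * freq
    z_det = np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])
    rho_a = np.abs(z_det) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(z_det))
    return rho_a, phase

# Check against a uniform half-space of 60 Ohm-m:
# Z_xy = sqrt(i*omega*mu0*rho), Z_yx = -Z_xy, diagonal terms zero.
freq = 1.0
rho = 60.0
z1d = np.sqrt(1j * 2.0 * np.pi * freq * MU0 * rho)
Z = np.array([[0.0, z1d], [-z1d, 0.0]], dtype=complex)
rho_a, phase = det_apparent_resistivity(Z, freq)   # 60 Ohm-m, 45 degrees
```

The determinant combination is rotation-invariant, which is why it is a convenient single quantity to feed into 1D and 2D inversions.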
The long- and short-periodic signals originate from fluctuations in the intensity of the solar wind and global lightning activity, respectively. The electromagnetic energy released in discharges propagates with slight attenuation over large distances in a wave-guide between the ionosphere and Earth's surface. At large distances from the source this is a plane wave with frequencies from about 10<sup>-5</sup> to 10<sup>5</sup> Hz. The magnetotelluric fields can penetrate the Earth's surface and induce telluric currents in the subsurface. The MT method uses simultaneous measurements of natural time variations in the three components of the Earth's magnetic field (H<sub>x</sub>, H<sub>y</sub>, and H<sub>z</sub>), and the orthogonal horizontal components of the induced electric field (E<sub>x</sub> and E<sub>y</sub>) to obtain the distribution of the electric conductivity in the Earth's interior.
Magnetotelluric studies are important because they contain information about the fluid content and thermal structure, which are key parameters for defining the rheology of the crust and upper mantle. The method has proved useful for widespread applications; for example, MT is extensively used in imaging fluids in subduction zones and volcanic belts, in orogenic regions, in the delineation of ancient and modern subduction zones, and in lithospheric studies.
1D and 2D inversions are conducted to resolve the conductive structures. We performed 1D inversion of the determinant data using a code from Pedersen (2004) for all sites. Since the quality of the determinant data was acceptable, we performed 2D inversion of the determinant data using a code from Siripunvaraporn and Egbert (2000).
An MT survey was carried out using MTU2000 systems belonging to Uppsala University of Sweden. Data are stored on an internal hard disk and downloaded via a connection; power is supplied by a 12 V external battery. Three magnetometers and two pairs of non-polarizable electrodes are connected to this five-channel data logger. For the registration of magnetic field variations in the range from 10,000 to 0.001 Hz, broadband induction coil magnetometers are used. The electric field variations are registered by measuring potential differences with non-polarizable electrodes. The experimental set-up includes four electrodes, distributed at a distance of 100 m in the north–south (E<sub>x</sub>) and east–west (E<sub>y</sub>) directions. They are buried at a depth of about 30 cm, and their coupling to the soil is improved using water. The ADU logger and magnetometers are located in the centre, whereas the three induction coils are oriented north–south (H<sub>x</sub>), east–west (H<sub>y</sub>) and vertically (H<sub>z</sub>) at a distance of 10 m from the data logger, at least 1 m from the electric field wires and 5 m from any conductive object. The vertical coil was buried to 4/5 of its length and covered by a plastic tube to shield the recordings from the influence of wind.
In this study, the subsurface structure of Havir Lake, southeast of Damavand volcano, has been studied using the magnetotelluric (MT) method. There are two ideas about the lake's origin: activation of Quaternary glaciers through their movement and melting, and/or a product of Mosha fault activity. We therefore gathered the geological and geophysical evidence related to the field work to find a plausible answer to this question. A north–south magnetotelluric profile was designed in the southern part of the lake. After acquisition, processing and 1D and 2D inversion of the MT data, and with respect to the structural and geological information, a low-resistivity body (60 Ohm-m) was distinguished in the southern part. Its thickness is about 4000 m, extending from a depth of nearly 500 m down to 4500 m. Its existence seems to be due to the shear movement of the Mosha fault and the debris of glacier movements. In the northern part of the profile (immediately near the lake) a very resistive body (1000 Ohm-m) is recognized from about 300 m below the surface, with a thickness of about 400 m, which most probably is the very rigid basement of the lake, probably belonging to the undivided Jiroud and Mobarak formations.
Brittle zone,Central Alborz,Havir lake,magnetotellurics,Mosha fault
https://jesphys.ut.ac.ir/article_35190.html
https://jesphys.ut.ac.ir/article_35190_4b7049ca2cec82e437caa6d8d69495a0.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
A radiative-advective model for estimation of nocturnal cooling in a basin surrounded by topography (Rafsanjan basin)
111
126
FA
Niloofar
Akbarimoghadam
M.Sc. Student in Meteorology, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
Abbas Ali
Aliakbari-Bidokhti
Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
bidokhti3@ut.ac.ir
Parviz
Irannejad
Associate Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
piran@yahoo.com
10.22059/jesphys.2013.35192
In the nighttime, drainage flow occurs along the basin sideslope and advects cold air to the boundary layer over the basin bottom (BBL), intensifying the cooling rate of the layer. A nocturnal cold air lake develops in the basin, attaining a depth nearly equal to the topographical depth of the basin. Heat budget analysis of the whole basin surface shows that net radiative flux closely balances with sensible heat flux and ground heat conduction.
In the daytime, the BBL is warmed not only by sensible heat flux from the surface of the basin bottom, but also by local subsidence heating. This local subsidence above the basin bottom depresses development of the convective boundary layer until the nocturnal cold air lake vanishes completely. The subsidence velocity increases with time after sunrise. Over the whole basin surface, net radiative flux closely balances with sensible and latent heat fluxes.
Cooling in an enclosed basin surrounded by topography is a function of different factors, most notably the local processes of radiation and advection due to drainage flows. In this study, radiative cooling, combined with an air-parcel model for down-slope winds under a zero-latent-heat-flux assumption, is used to build a numerical scheme for estimating nocturnal cooling in such basins. The meteorological conditions are assumed to be calm, which is often the case for topographically enclosed basins in this area. The model requires a prescribed potential-temperature lapse rate during the night. For validation of the model, data from the Aizu basin in Japan, for which a good set of measurements exists, are used.
For typical model basins, the dependence of the nocturnal cooling on topographic parameters is obtained as follows: (i) The governing parameters are the depth of the basin and a shape parameter. A conical basin with a small shape parameter has more air cooling and a weaker slope wind than a flat-bottomed basin with a large shape parameter; (ii) Mean sensible heat flux during the night is almost proportional to the cube root of the depth of the basin, but is little affected by the shape parameter.
Sensitivity to radiative conditions, the thermal constant of the ground, and surface roughness is also examined in this study.
The results show that for a conical surrounding topography the temperature drop during the night is greater than for a bowl-like shape of the same depth. Also, as the depth of the basin increases, this temperature drop becomes larger. A drier basin surface leads to stronger radiative cooling and hence lower temperatures in comparison to the wetter case. It is also found that the steepness of the surrounding slopes does not affect the cooling rate as long as the depth of the basin is kept constant.
The results of the model also show that often (> 65% of the time) the morning temperature of the basin surface can reach zero degrees Celsius if the evening temperature is about 9.5 degrees Celsius or less. This can be used to issue warnings to farmers in such areas in order to avoid frost damage to crops.
The model is used for Rafsanjan city to predict the nocturnal temperature drop in spring. This area, with its vast pistachio orchards, is prone to frost damage in springtime.
Night cooling,subsidence,Basin,Cold pool,Sensible heat,A radiative-advective model
https://jesphys.ut.ac.ir/article_35192.html
https://jesphys.ut.ac.ir/article_35192_94ecb1c9d0a4930590e94775a933f7b4.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
The trend of the Siberian high pressure and its impacts on the meteorological fields during 1948-2008
127
138
FA
Masoomeh
Ahmadi-hojat
Ph.D. Student of Meteorology, Department of Meteorology, Science and Research Branch, Islamic Azad University of Tehran, Iran
ahmadihojat@irimo.ir
Farhang
Ahmadi-Givi
Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
ahmadig@ut.ac.ir
Sohrab
Hajjam
Assistant Professor, Department of Meteorology, Science and Research Branch, Islamic Azad University of Tehran, Iran
shajam@ut.ac.ir
10.22059/jesphys.2013.35193
One of the most important atmospheric systems over the Eurasian continent during the Northern Hemisphere winter is the Siberian high pressure. In this study, using NCEP/NCAR reanalysis data for the winters of 1948-2008, the trend of changes in the intensity of the high's center and its effects on some meteorological fields are investigated. The results show that the mean 2-m temperature in the central Siberian high region was -17.7 °C during the study period and the mean sea level pressure (MSLP) was over 1030 hPa. The MSLP had a minor positive linear trend of 1.10 hPa/dec (hPa per decade) at the beginning of the 60-year period. Also, a weak negative linear trend of -0.12 hPa/dec occurs from the early 1970s. There is a significant correlation (-0.46) between the mean temperature averaged over the area of influence of the central Siberian high and the Siberian High Index (SHI) during the study period: in most cases, enhancement (weakening) of the high center is accompanied by cooling (warming) in the region. Calculations indicate a warming trend of 0.13 °C/dec for the area. The results show that the pressure gradient drives strong northerly monsoonal currents and thus cold advection toward the Far East, i.e., the East Asian winter monsoon. Correlation coefficients calculated between the SHI and some meteorological fields indicate that an enhanced Siberian high provides suitable conditions for cyclogenesis over the Mediterranean Sea and the development of warm advection from North Africa toward the eastern Mediterranean and then northern Europe. Another result indicates that when anticyclones form and develop over Siberia, they act as a barrier to the eastward movement of extratropical cyclones. This deflects the systems to higher latitudes, so fewer cyclonic systems pass over Siberia, which reduces warm advection there.
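The decadal trends and index-temperature correlations quoted above are straightforward to compute; a minimal sketch with synthetic series (the values below are illustrative stand-ins, not the paper's data):

```python
import numpy as np

def decadal_trend(years, values):
    """Least-squares linear trend of a yearly series, expressed per decade."""
    slope = np.polyfit(years, values, 1)[0]
    return slope * 10.0

# Synthetic winter-mean series for 1948-2007 (illustrative only).
years = np.arange(1948, 2008)
mslp = 1030.0 + 0.11 * (years - years[0])    # drifting MSLP (hPa)
temp = -17.7 - 0.5 * (mslp - mslp.mean())    # colder when the high strengthens

trend = decadal_trend(years, mslp)           # 1.1 hPa per decade
r = float(np.corrcoef(mslp, temp)[0, 1])     # -1.0 for this noise-free example
```

Real series carry noise, so the correlation magnitude drops well below 1 (the paper reports -0.46) and trend significance must be tested separately.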
Furthermore, the calculated teleconnection correlations show that the system affects atmospheric variables beyond its established region. Enhancement of pressure in the system normally leads to a strong pressure gradient between the eastern flank of the Siberian high and the western flank of the Aleutian low.
Correlation coefficients between the SHI and some meteorological fields in the extratropical regions of the Northern Hemisphere indicate that the Siberian high can exert impacts on meteorological fields beyond its source region. Examples include a strong relationship between the SHI and the East Asian winter monsoon, enhancement of suitable conditions for cyclogenesis over the Mediterranean Sea, and intensification of warm advection from North Africa toward the eastern Mediterranean Sea and northern Europe. Beyond these features, there is intensification of the subtropical jet over East Asia and the eastern Mediterranean Sea, as well as a wave train in the form of positive and negative correlation coefficients between the SHI and the geopotential height (GPH) field in the middle and upper troposphere. These findings suggest that the Siberian high should not be considered simply a local low-level phenomenon; it exerts significant impacts on middle- and upper-level circulations.
Siberian high pressure index,Linear trend,teleconnection,East Asian winter monsoon,subtropical jet stream
https://jesphys.ut.ac.ir/article_35193.html
https://jesphys.ut.ac.ir/article_35193_5d030eef7247b5d6eb7593d2ff2efa85.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Identification of mass composition of cosmic rays considering extensive air shower’s electron and muon component with highest energies
139
143
FA
Somayyeh
Soomandar
M.Sc. in Physics, Faculty of Physics, Shahid Bahonar University, Kerman, Iran
phsoomandar@gmail.com
Seyyed Jalileddin
Fatemi S
Professor, Faculty of Physics, Shahid Bahonar University, Kerman, Iran
Saeed
Doostmohammadi
Ph.D. Student of Physics, Faculty of Physics, Shahid Bahonar University, Kerman, Iran
10.22059/jesphys.2013.35195
The phenomena of extensive air showers (EAS) are produced by the collision of primary cosmic rays (CR) with energy more than eV with atmospheric molecules. As a result, the electron and muon components (cascades) of the EAS develop through the air.
The study of such cascades gives important information about the primary CR mass composition as well as its astrophysical origin models. One EAS detection method uses ground arrays of electron and muon detectors; here, data of the Yakutsk EAS array located in Russia have been used for primary CR with energy more than eV. In the catalogue of the world's data, EAS parameters such as the electron density, muon density, shower core distance R, primary energy E and arrival directions (zenith angle θ, azimuth angle Φ) of each shower are given.
In this search, different EAS parameters such as the age parameter, shower size (N<sub>e</sub>) and total number of muons (N<sub>μ</sub>) are used as mass-composition discriminators. The total numbers of muons and electrons in the shower have been calculated using the lateral distribution functions (LDF) of the electron and muon components of the Nishimura-Kamata-Greisen (NKG) formula.
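The NKG lateral distribution can be sketched as below. The normalization constant is the standard one that makes the electron density integrate to the shower size N<sub>e</sub> over the shower plane (valid for age 0 &lt; s &lt; 2.25); the Molière radius value is an illustrative assumption, not a parameter taken from this paper:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def nkg_density(r, Ne, s, rM=70.0):
    """NKG electron density (m^-2) at core distance r (m) for shower
    size Ne and age parameter s; rM is the Moliere radius (m)."""
    # Normalization so that the density integrates to Ne over the plane.
    c = gamma(4.5 - s) / (2.0 * np.pi * gamma(s) * gamma(4.5 - 2.0 * s))
    x = r / rM
    return (Ne / rM ** 2) * c * x ** (s - 2.0) * (1.0 + x) ** (s - 4.5)

# Integrating the density over the shower plane recovers the shower size.
Ne, s = 1.0e8, 1.3
total, _ = quad(lambda r: 2.0 * np.pi * r * nkg_density(r, Ne, s), 0.0, np.inf)
```

In practice, Ne and s are obtained by fitting this function to the densities measured by the array detectors at known core distances.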
The first sensitive parameter is the muon-to-electron ratio N<sub>μ</sub>/N<sub>e</sub>, whose dependency on EAS energy is studied. It is expected that this ratio should increase from primary gamma rays to protons and then, in turn, to heavy nuclei. The dependency of the calculated ratio on energy shows an increase that could be due to the LPM effect of gamma primaries. At the highest energies, the strong increase of the ratio suggests a heavier mass composition. The second parameter for investigating EAS mass composition is the EAS age parameter, which is also calculated using the LDF of the NKG formula. A higher age corresponds to a flatter electron LDF, i.e., a higher mass composition. Again, it is observed that the age increases (indicating higher mass) above this energy range.
The last main parameter for investigating mass composition is the dependency of N<sub>μ</sub> on N<sub>e</sub>. The calculated experimental results have been compared with CORSIKA simulations for gamma, proton (P) and iron (Fe) cosmic-ray primaries. The results suggest a mixed P-Fe composition at the lower energies studied and Fe primaries at the highest energies. In conclusion, the study of the dependence of the EAS age and N<sub>μ</sub>/N<sub>e</sub> on E, together with the dependency of N<sub>μ</sub> on N<sub>e</sub> and its comparison with the simulations, consistently shows an increase of the cosmic-ray mass composition with primary energy.
Also, the increased mass composition of CR (higher charge) implies more deflection of CR in the Galactic magnetic field. Therefore, particles of the highest energies are more confined to the Galaxy than those of lower energies, so their sources may be of galactic rather than extragalactic origin.
The results of this search also shed light on the astrophysical origin models of CR, namely top-down models (10-50 percent gamma primaries) and bottom-up models (less than 1 percent photons). (The low percentage of gamma primaries does not favor the top-down, no-acceleration scenario.)
Cosmic ray,Extensive Air Shower,Muon component,Electron component
https://jesphys.ut.ac.ir/article_35195.html
https://jesphys.ut.ac.ir/article_35195_817bd1bec12e72e8d7ad2d53a1bf3ef0.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Short-range precipitation forecast evaluation of the WRF model over Iran
145
170
FA
Farahnaz
Taghavi
0000-0003-4399-882X
Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
ftaghavi@ut.ac.ir
Abolfazl
Neyestani
M.Sc. Student in Meteorology, Institute of Geophysics, University of Tehran, Iran
Sarmad
Ghader
0000-0001-9666-5493
Associate Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
sghader@ut.ac.ir
10.22059/jesphys.2013.35196
Iran has a complex topography, consisting of rugged mountainous rims surrounding high interior basins. Because of this, in some cases the NWP output has significant errors arising from mesoscale variations induced by the diverse topography.
Iran, covering an area of about 1,648,000 km<sup>2</sup>, is located in southwest Asia, approximately between 25° and 40° N and 44° and 64° E. This is a predominantly semi-arid to arid region bounded by the Caspian Sea to the north and the Persian Gulf to the south and crossed by the impressive Zagros and Alborz Mountains. However, the performance of the next-generation mesoscale forecast model (the Advanced Research Weather Research and Forecasting model, ARW) developed by NCAR (Skamarock et al. 2005) has not been fully tested in operational forecasts over Iran.
A few previous model studies (Evans and Smith 2001; Evans et al. 2004; Zaitchik et al. 2007; Marcella and Eltahir 2008; Xu et al. 2009) provided some interesting results for basic weather simulation in SWA using a regional climate model [the second-generation National Center for Atmospheric Research (NCAR) Regional Climate Model (RegCM2)] or the fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5). They pointed out that the regional model has difficulty producing an accurate simulation of meteorological variables in certain subregions; this includes an accurate description of storm tracks, topographic interactions, and atmospheric stability.
On the other hand, verification is a critical component of the development and use of forecasting systems. Ideally, verification should play a role in monitoring the quality of forecasts, providing feedback to developers and forecasters to help improve forecasts, and providing meaningful information that forecast users can apply in their decision-making processes.
One of the purposes of this study is to evaluate the performance of the WRF model over the complex terrain of Iran. This evaluation concentrates primarily on the precipitation forecasts. In this paper, real-time gridded 24-h and 48-h precipitation forecasts from the NCAR model (the Advanced Research Weather Research and Forecasting model; WRF) are verified over Iran from 1 to 28 February 2007. The grid has 209 × 194 points, centred at 54°E longitude and 32°N latitude, and the horizontal resolution of the main domain is 21 km. The observed precipitation data were taken from the FNL (Final Operational Global Analysis) reanalysis data with a horizontal resolution of 1°×1°.
All forecasts are mapped to a 21-km latitude–longitude grid and verified against an operational precipitation analysis (more than 100 rain gauges) mapped to the same grid. In this study, we first describe the forecasting errors of the WRF-ARW model for precipitation. We then introduce some techniques for evaluating the forecasts, designed particularly to examine the difficult features of precipitation fields.
Weather variables forecast by numerical models have a discrete nature. Assessment of discrete variables such as precipitation requires discrete evaluation techniques, and evaluation of quantitative precipitation forecasts (QPF) is challenging because of the noisy, discontinuous and non-normal nature of precipitation.
A common method for evaluating quantitative precipitation forecasts is a categorical procedure based on contingency tables. Observations X and forecasts Y are converted to dichotomous events according to whether each exceeds a constant precipitation-rate threshold ‘u’. The behavior of binary categorical verification scores from contingency tables is then evaluated at several rainfall thresholds. Common measures for evaluating binary events are the hit rate, false alarm ratio, false alarm rate, skill scores, correct ratio, etc.
Some measures from signal detection theory (SDT), such as the area under the ROC curve and the discrimination distance, are also used. SDT offers two broad advantages. Firstly, it provides a means of assessing the performance of a forecasting system that distinguishes between the intrinsic discrimination capacity and the decision threshold of the system. The main analysis tool that accomplishes this is the relative operating characteristic (ROC), a graph of hit rate against false alarm rate as <em>u</em> varies, with the false alarm rate plotted on the <em>X</em>-axis and the hit rate on the <em>Y</em>-axis. Secondly, SDT provides a framework within which other methods of assessing binary forecasting performance can be analyzed and evaluated.
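The contingency-table scores and ROC points described above can be sketched as follows; the observation and forecast series here are synthetic stand-ins for the gauge analysis and WRF output (all data are hypothetical):

```python
import numpy as np

def contingency_scores(obs, fcst, u):
    """Scores of the 2x2 contingency table for the event 'precip > u'."""
    o = np.asarray(obs) > u
    f = np.asarray(fcst) > u
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_negatives = np.sum(~f & ~o)
    pod = hits / (hits + misses)                              # hit rate
    far = false_alarms / (hits + false_alarms)                # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)  # false alarm rate
    return pod, far, pofd

# Hypothetical 24-h totals (mm) and an imperfect forecast of them.
rng = np.random.default_rng(0)
obs = rng.gamma(0.5, 4.0, size=2000)
fcst = obs + rng.normal(0.0, 0.5, size=2000)

# One ROC point (false alarm rate, hit rate) per threshold u.
roc = []
for u in (0.1, 1.0, 5.0):
    pod, far, pofd = contingency_scores(obs, fcst, u)
    roc.append((pofd, pod))
```

A skillful forecast keeps the hit rate well above the false alarm rate at every threshold, which is what places its ROC curve above the diagonal.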
Due to the complexity of the Iranian plateau and lack of knowledge in the estimation of the physical processes in this area, forecasters should have greater awareness of these limitations of the model when forecasting in this region.
Examination of the calculated QPF scores shows that the WRF model correctly estimates the general pattern of the precipitation bands, but there are problems in the actual rain amounts. Moreover, skill scores for different thresholds over the whole investigated area, both for the one-month period and for the days of intense synoptic activity, show good performance of the WRF model in estimating precipitation in most areas. In addition, ROC curves give a measure of performance at all thresholds. For the 0.1 mm rainfall threshold at selected synoptic stations, the model estimates rainfall frequency properly and the skill scores are acceptable, although the precipitation-rate estimates still have problems. The verification scores of the model for quantitative precipitation are better for 24-h than for 48-h forecasts. The results suggest that improvements in initialization may be as important as, or more important than, improvements in the physics of the land-surface processes.
Forecast evaluation,WRF,precipitation,contingency table,Iran
https://jesphys.ut.ac.ir/article_35196.html
https://jesphys.ut.ac.ir/article_35196_d2bc82e987c8bbceb504ce6f5f85a424.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
The relation of the North Sea–Caspian pattern (NCP) and the East Europe–Northeast Iran pattern (ENEI) with the number of extreme cold temperatures in Iran during cold seasons
171
186
FA
Seyyed Abolfazl
Masoodian
0000-0001-6227-6713
Professor, Department of Climatology, Faculty of Geographical Sciences and Planning, University of Isfahan, Iran
porcista@yahoo.ie
Mohammad
Darand
Assistant Professor, Department of Climatology, Faculty of Natural Resources, University of Kurdistan, Iran
10.22059/jesphys.2013.35197
One of the effects of climate change is the possible increase in both frequency and intensity of extreme weather events. Extreme weather and climate events have a major impact on ecosystems and human society due to their severity and the fact that they often occur unexpectedly. In warmer climates and during transition seasons, cold extremes have agricultural impacts, manifested in crop damage due to frost. The identification of teleconnections and the analysis of their impact on the atmospheric circulation can be very useful for understanding anomalous events in many regions of the planet, when one assumes that local forcing may influence the atmospheric circulation at remote locations. Teleconnection patterns are simultaneous correlations in the fluctuations of large-scale atmospheric parameters at points on the Earth that are far apart. The effect of these patterns can be significant through the dominant modes of atmospheric variability. Teleconnection patterns reflect large-scale changes in the atmospheric wave and jet-stream patterns, and influence temperature over vast areas. Thus, they are often responsible for abnormal weather patterns occurring simultaneously over seemingly vast distances. The objective of this study is to clarify whether the frequency of extreme cold temperature occurrences in Iran during the cold period is correlated with the North Sea–Caspian pattern (NCP) and the East Europe–Northeast Iran pattern (ENEI).
In order to study the relation between the monthly numbers of extreme cold temperature days in Iran during the cold period and the North Sea–Caspian pattern (NCP) and East Europe–Northeast Iran pattern (ENEI), temperature data of 663 synoptic and climatic stations from 1/1/1962 to 31/12/2004 have been used. Temperature was then interpolated over Iran onto 15 × 15 km pixels using the kriging method. A 7853 × 7187 matrix was created, with the days of the period (7853) on the rows and the pixels (7187) on the columns. There is no single definition of what constitutes an extreme event. In defining an extreme event, some factors that may be taken into consideration include its magnitude, which involves the notion of exceeding a threshold. The most general, simple and thus most widely used method for defining an extreme temperature event is based on the frequency of occurrence of the event. In this study, the extreme cold days during the cold period were first identified with the Fumiaki index. Then, for each month of the cold period, the number of extreme cold temperature occurrences was calculated. Monthly cold-period data for the North Sea–Caspian pattern (NCP) and the East Europe–Northeast Iran pattern during the study period were extracted from the NCEP/NCAR data site of the United States National Oceanic and Atmospheric Administration. The correlation between the monthly numbers of extreme cold temperature days in Iran during the cold period and the NCP and ENEI patterns was calculated.
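A minimal sketch of the counting-and-correlation procedure follows, using a simple percentile threshold as a stand-in for the Fumiaki index and synthetic series in place of the station data (all numbers are illustrative):

```python
import numpy as np

def extreme_cold_days(tmin, pct=5.0):
    """Number of days whose minimum temperature falls below the given
    percentile of the record (a percentile stand-in for the Fumiaki
    index used in the paper)."""
    threshold = np.percentile(tmin, pct)
    return int(np.sum(tmin < threshold))

rng = np.random.default_rng(1)
# One synthetic cold-season month of daily minima for one pixel (deg C).
tmin = rng.normal(-3.0, 4.0, size=31)
n_cold = extreme_cold_days(tmin)

# Correlate monthly extreme-day counts with a circulation index (e.g. NCP);
# the positive relation here is built in purely for illustration.
index = rng.normal(0.0, 1.0, size=120)                   # 120 cold-season months
counts = 3.0 + 2.0 * index + rng.normal(0.0, 1.0, 120)   # extreme-day counts
r = float(np.corrcoef(index, counts)[0, 1])
```

In the paper this correlation is computed pixel by pixel over the 15 × 15 km grid, which yields the correlation and explanation-coefficient maps.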
After extracting the number of extreme cold day occurrences for each month of the cold period over the study years, the correlation with the North Sea–Caspian pattern (NCP) and the East Europe–Northeast Iran pattern (ENEI) was calculated, along with the magnitude of the explanation coefficient. The maps of the correlation and explanation coefficients are shown in Figures 2 to 6. There is a significant correlation between the monthly numbers of extreme cold days during the cold period and the NCP and ENEI at the 95% confidence level.
The results showed that there is a positive correlation between the monthly numbers of extreme cold temperature days in Iran during the cold period and the North Sea–Caspian pattern. The positive phase results in an increase of extreme cold days in the western part of Iran. The positive phase of the NCP is accompanied by a positive anomaly of the 500-hPa geopotential height over the North Sea and a negative anomaly over the Caspian Sea. This results in cold-air advection towards Iran, especially its western parts. In January, the correlation is significant and positive over 95% of Iran's area. The highest explanation coefficients are observed for the western and northern parts of Iran.
Extreme cold temperature,North Sea–Caspian pattern,East Europe–Northeast Iran
https://jesphys.ut.ac.ir/article_35197.html
https://jesphys.ut.ac.ir/article_35197_3f9cbc08cf4b783a1afa538ab44b926d.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Evaluation of Tehran precipitation using meteorological radars, based on the Z-R method, during 2010 and 2011
187
204
FA
Amir
Mohammadiha
Ph.D. Student in Meteorology, Hormozgan University, Iran
Mohammad Hossein
Memarian
Assistant Professor, Faculty of Physics, Yazd University, Iran
Mohammad
Reyhani Parvari
M.Sc. in Meteorology, I.R. Iran Meteorological Organization, Iran
10.22059/jesphys.2013.35198
Meteorological radars estimate precipitation using a reflectivity-precipitation relation of the form <em>Z=aR<sup>b</sup></em> with coefficients <em>a</em> and <em>b</em>. These coefficients change from one precipitation event to another. For the Tehran radars they are taken as a=200 and b=1.6, values appropriate for moderate precipitation. This assumption causes errors in the radar estimation of other types of precipitation. In order to assess how well the Tehran radars evaluate rain, we consider the amount of rain at three chosen rain-recording stations: Qom, Kooshk Nosrat and Pakdasht.
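A minimal sketch of recovering rain rate from reflectivity under Z = aR<sup>b</sup>, and of refitting a and b by linear regression in log-log space, follows; the pair a=300, b=1.5 below is an illustrative convective-type choice, not a coefficient fitted in this paper:

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from reflectivity in dBZ, inverting Z = a*R**b
    with Z = 10**(dBZ/10) in mm^6/m^3 (defaults: Marshall-Palmer-type)."""
    Z = 10.0 ** (np.asarray(dbz) / 10.0)
    return (Z / a) ** (1.0 / b)

def fit_zr(R_gauge, Z_radar):
    """Refit a and b via linear regression of log10(Z) on log10(R):
    log10(Z) = log10(a) + b*log10(R)."""
    b, log_a = np.polyfit(np.log10(R_gauge), np.log10(Z_radar), 1)
    return 10.0 ** log_a, b

# Synthetic check: reflectivities generated with a=300, b=1.5.
R_true = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
Z = 300.0 * R_true ** 1.5
dbz = 10.0 * np.log10(Z)

a_fit, b_fit = fit_zr(R_true, Z)          # recovers 300 and 1.5
R_default = rain_rate(dbz)                # biased: uses a=200, b=1.6
R_refit = rain_rate(dbz, a_fit, b_fit)    # matches R_true
```

With noisy gauge-radar pairs the regression no longer reproduces the generating coefficients exactly, but the same log-log fit yields the locally calibrated a and b.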
The aim of this study is to evaluate the estimation by the Tehran meteorological radars for different amounts of precipitation. Therefore, three intervals, 11/1/2010-11/4/2010, 1/8/2011-1/11/2011 and 1/15/2011-1/18/2011, in which precipitation was reported, have been chosen.
The primary results indicate that the Tehran meteorological radars estimate the precipitation amounts as less than those registered by the rain gauges of the meteorological stations. These differences become larger at higher precipitation rates. To amend the precipitation values estimated by the Tehran radars, these amounts are evaluated against the data given by the rain-recording stations and finally rescaled using the logarithmic Z-R relation. Using this relation, new values of the coefficients are obtained for the different precipitation dates. After fitting the linear regression equation and obtaining new coefficients for estimating the rain intensity, we obtain a plot indicating how the precipitation estimates change when the required corrections are applied. We are also able to plot the precipitation changes at the rain-recording stations used. The results indicate a marked improvement in the estimates when the new coefficients are used. Comparison of the rain data gathered by the radars with the data of the rain-recording stations shows a 40% agreement between them. With the new coefficients this agreement increases to 90%, which confirms that the new coefficients are appropriate for the precipitation evaluation equation.
The results of this study show that the radar error stems from two factors. First, raindrops absorb part of the radar signal, so the returned signal is attenuated; the more intense the rainfall, the weaker the reflected waves. Second, reflectivity is proportional to the sixth power of the raindrop diameter, so the radar return is strongly affected by drop size: larger drops increase the reflection, whereas rainfall measured by a rain gauge is independent of drop size, and the recorded rainfall depth or precipitation volume is unaffected. In other words, the radar overestimates precipitation in rainfall with larger drops and underestimates it in rainfall with smaller drops.
Rainfall, Meteorological radar, Reflectivity, Evaluation, Rain gauge, Automatic rain recorder, Tehran
https://jesphys.ut.ac.ir/article_35198.html
https://jesphys.ut.ac.ir/article_35198_e39c9cdd7623ced41e7211ac3191b864.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Validation of global ocean tide models in coastal regions of Persian Gulf and Oman Sea using coastal tide gauge observations
205
214
FA
Hosein
Zomorodian
Professor, Department of Geophysics, Science and Research Branch, Islamic Azad University, Tehran, Iran
hzomorod2@hotmail.com
Alireza
Ardalan
0000-0001-5549-3189
Professor, Department of Surveying and Geomatics Engineering, University of Tehran, Tehran, Iran
ardalan@ut.ac.ir
Nasrin
Khodabakhsh-Shahrestani
M.Sc. in Geophysics, Department of Geophysics (Gravimetry), Science and Research Branch, Islamic Azad University, Tehran, Iran
10.22059/jesphys.2013.35199
In recent years, several ocean tide models have been developed to compute tides from satellite and tide-gauge data. Global ocean tide models have many applications in sciences such as geophysics, geology, geodesy, and oceanography. Given the number of available models, quantitative evaluation, ranking, and selection of the best global ocean tide model is an important objective.
The purpose of this study is the validation of global ocean tide models, including FES2004, FES99, NAO.99b, TPXO6.2, and TPXO7.1, in the Persian Gulf and the Oman Sea, and the selection of the optimal model for determining tidal characteristics in these regions. Since most global tide models are designed for the deep sea, the models evaluated in this study were chosen not only to cover the latitudes and longitudes of the Persian Gulf and the Oman Sea, but also to be usable in shallow water as well as in the Oman Sea.
To evaluate the models, the tidal-analysis results obtained from them are compared with tidal analyses based on tide-gauge observations in the Persian Gulf and the Oman Sea (Jask, Chabahar, Shahid Rajaee, Bushehr, Emam Hassan, and Kangan). For the tidal analysis based on the global ocean tide models, the models must first be run. Running TPXO6.2 and TPXO7.1 requires dedicated software called the Tide Model Driver (TMD), a package of scripted functions for batch-mode Matlab processing. The models FES2004, FES99, and NAO.99b open directly in Matlab and need only a short Matlab program to select the geographic region of the Persian Gulf and the Oman Sea. In all cases the input data are the geographical latitudes and longitudes of the study area together with the selected tidal constituents, and the output data are the amplitudes and phases of those constituents.
In this research the tide-gauge-based tidal analysis was conducted in two ways: tidal modeling and use of the results of the IOS software. The tidal modeling employed a Fourier sine and cosine series expansion fitted by least squares.
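The least-squares tidal modeling described above can be sketched as follows. This is an illustrative implementation, not the code used in the study: each constituent of angular frequency w contributes one cosine and one sine column to a design matrix that is fitted to the sea-level record.

```python
import numpy as np

def harmonic_fit(t, h, omegas):
    """Least-squares fit of a sea-level series h(t) to a sum of
    tidal constituents with angular frequencies omegas (rad/h):
        h(t) ~ h0 + sum_k [ A_k cos(w_k t) + B_k sin(w_k t) ]
    Returns the mean level h0 and per-constituent amplitudes and
    phases (radians), with amplitude = hypot(A, B)."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]
    for w in omegas:
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    G = np.column_stack(cols)                    # design matrix
    x, *_ = np.linalg.lstsq(G, np.asarray(h, dtype=float), rcond=None)
    h0 = x[0]
    A, B = x[1::2], x[2::2]                      # cosine / sine coefficients
    return h0, np.hypot(A, B), np.arctan2(B, A)
```

Given an hourly record and the angular frequencies of K1, O1, M2, and S2, the returned amplitudes and phases are directly comparable with those extracted from the global models.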
Both approaches showed that the major part of the elevation signal at the stations is carried by the main tidal constituents (K<sub>1</sub>, O<sub>1</sub>, M<sub>2</sub>, S<sub>2</sub>), and that the largest amplitude observed at the stations belongs to M<sub>2</sub>. In this study the results of the five global ocean tide models were compared with the tide-gauge results obtained with the IOS software at all stations, using several statistical measures:
- Root mean square (RMS) of amplitude
- RSS of the amplitude RMS
- Root mean square of the vector difference
- RSS of the vector-difference RMS
Comparison of the root mean squares of the constituent amplitudes derived from the models with the tide-gauge results (obtained both with the IOS software and with the tidal modeling) showed that the amplitude RMS of every model except FES99 is below one decimeter. The statistical analysis showed that the best agreement between the model results and the tide-gauge results occurs at Jask and Chabahar, which are closer to the open sea, and that the FES2004 model agrees best with the tide-gauge results in the Persian Gulf and the Oman Sea.
Compared with the tide-gauge results, the FES2004 model has the lowest RSS of the amplitude RMS (8.5843 cm). Compared with the IOS software results, FES2004 again has the lowest RSS of the amplitude RMS (8.795 cm), as well as the lowest RSS of the vector-difference RMS (9.378 cm).
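The statistical measures listed above can be written compactly. The sketch below is illustrative only (the test values are made up, not the study's station data): it computes the per-station vector difference between a model constituent and the gauge estimate, the RMS over stations, and the RSS over constituents.

```python
import numpy as np

def rms_misfit(model, obs):
    """Root mean square of per-station differences (e.g. amplitudes)."""
    d = np.asarray(model, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def vector_difference(amp_m, pha_m, amp_o, pha_o):
    """Vector difference between a model constituent (amplitude, phase
    in radians) and the tide-gauge estimate, treating each constituent
    as a 2-D vector (A cos g, A sin g)."""
    dx = amp_m * np.cos(pha_m) - amp_o * np.cos(pha_o)
    dy = amp_m * np.sin(pha_m) - amp_o * np.sin(pha_o)
    return np.hypot(dx, dy)

def rss(values):
    """Root sum of squares across constituents."""
    v = np.asarray(values, dtype=float)
    return float(np.sqrt(np.sum(v ** 2)))
```

Applying `rms_misfit` per constituent over the six stations and then `rss` over the constituents reproduces the kind of summary numbers quoted above for FES2004.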
Tide, Global ocean tide models, Tidal constituent, Mean sea level, Tide gauge
https://jesphys.ut.ac.ir/article_35199.html
https://jesphys.ut.ac.ir/article_35199_686a5056e3b833a0cb2c7be588306600.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics
2538-371X
2538-3906
39
2
2013
08
23
Finite element modeling of deformation field induced by dip and strike slip fault coseismic activity
215
225
FA
Bijan
Shoorcheh
Ph.D. Student of Geodesy, Department of Surveying and Geomatics engineering, University College of Engineering, University of Tehran, Iran
b.shoorcheh@ut.ac.ir
Mehdi
Motagh
Assistant Professor, Department of Surveying and Geomatics engineering, Center of Excellence in Surveying Engineering and Disaster Management, University College of Engineering, University of Tehran, Iran
Mohammad Ali
Sharifi
0000-0003-0745-4147
Assistant Professor, Department of Surveying and Geomatics engineering, Center of Excellence in Surveying Engineering and Disaster Management, University College of Engineering, University of Tehran, Iran
sharifi@ut.ac.ir
10.22059/jesphys.2013.35200
Many earthquakes occur in Iran every year, and some of them cause loss of life and property; earthquakes are therefore a challenging topic not only in Iran but also in other tectonically active regions of the world. Investigating the earthquake mechanism as a natural disaster is the first and most important step. Studying earthquakes requires diverse information, such as the geometry and behavior of active faults and the mechanical properties of the Earth's uppermost layers. Knowledge of the geometric and rheological properties of the Earth's layers, as well as of contemporary strain, temperature, and stress, has increased significantly over the past decade, thanks in part to the availability of Global Navigation Satellite Systems (GNSS) such as GPS, modern space-geodetic data processing, and new tools such as PS-InSAR that provide unprecedented spatial coverage of precise observations of the Earth's surface deformation. The only modeling approach that accommodates all of these geometrical and physical complexities is the Finite Element Method (FEM), and the first step in using FEM is to examine its capabilities.
In this paper the deformation fields of dip-slip (normal and reverse) and strike-slip (left-lateral) faults in a linear, homogeneous, isotropic elastic medium are investigated by means of 2D and 3D finite element modeling. With FEM, the complexity of a fault mechanism can be modeled to determine a precise Green operator and to solve the inverse problem of extracting the fault slip rate. As an example, we apply contact elements to build a frictionless fault surface and then compute the deformation field of dip-slip and strike-slip faults for one meter of slip on each side of the fault surface. The fault top lines are assumed to reach the ground surface. The dimensions of the semi-infinite medium are 1000*500 km for the dip-slip faults and 1000*3000*120 km for the strike-slip fault; these dimensions are large relative to the fault dimensions. The FEM deformation fields are compared with the Okada analytical model, and the comparison shows good agreement between FEM and the analytical solution. Our procedure can be summarized as follows:
1- 2D geometric modeling of dip-slip faults with dips of 90, 70, and 25 degrees, and 3D modeling of a vertical strike-slip fault.
2- Meshing the medium and assigning material properties (linear, homogeneous, isotropic, elastic).
3- Applying boundary conditions through horizontal and vertical displacement vectors.
4- Determining the horizontal and vertical displacement vectors at the ground surface by means of FEM.
5- Comparing the analytical (Okada) and FEM results and computing the root mean square (RMS) error as a test of the efficiency of the results.
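The comparison in step 5 reduces to a simple vector-field misfit. The sketch below is a minimal illustration with hypothetical displacement arrays, not the study's FEM or Okada outputs:

```python
import numpy as np

def displacement_rms(u_num, u_ana):
    """RMS misfit between numerical (FEM) and analytical (e.g. Okada)
    surface displacement vectors, given as arrays of shape
    (n_points, n_components): the Euclidean norm of the difference
    is taken per point, then the RMS over all points."""
    d = np.asarray(u_num, dtype=float) - np.asarray(u_ana, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=-1))))
```

A small RMS relative to the one meter of imposed slip indicates that the FEM mesh and boundary conditions reproduce the analytical deformation field well.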
Dip-slip fault, Strike-slip fault, Deformation field, Finite element method
https://jesphys.ut.ac.ir/article_35200.html
https://jesphys.ut.ac.ir/article_35200_e56c1f7729e5aaa30e9e4893e3a26393.pdf