Articles
Parvaresh, F. ,
Sobhani, R. ,
Abdollahi, A. ,
Bagherian, J. ,
Jafari, F. ,
Khatami bidgoli, M. IEEE Transactions on Information Theory (00189448) 71(6)pp. 4156-4166
To overcome challenges arising in flash memories, and also to protect against errors in reading information stored in DNA molecules with the shotgun sequencing method, the rank modulation scheme has been proposed. In the rank modulation framework, codewords are permutations. In this paper, we study the largest size P(n, d) of permutation codes of length n, i.e., subsets of the set S_n of all permutations on {1, ..., n} with minimum distance at least d ∈ {1, ..., $\binom{n}{2}$} under the Kendall τ-metric. By presenting an algorithm and two theorems, we improve the known lower and upper bounds for P(n, d). In particular, we show that P(n, d) = 4 for all n ≥ 6 and $\frac{3}{5}\binom{n}{2} < d \leq \frac{2}{3}\binom{n}{2}$. Additionally, we prove that for any prime number n and integer r ≤ (Formula presented), P(n, 3) ≤ (n−1)! − (Formula presented). This result greatly improves the upper bound on P(n, 3) for all primes n ≥ 37. © 1963-2012 IEEE.
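For readers unfamiliar with the metric used above, the short sketch below (an illustration, not taken from the paper) computes the Kendall τ-distance between two permutations as the number of pairs whose relative order differs, which equals the minimum number of adjacent transpositions turning one permutation into the other.

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of pairs ordered differently in p and q; equivalently, the
    minimum number of adjacent transpositions turning p into q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]                      # p rewritten in q's coordinates
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

# The identity and its reversal are at the maximum distance n(n-1)/2.
print(kendall_tau_distance([1, 2, 3, 4], [4, 3, 2, 1]))  # 6
print(kendall_tau_distance([1, 2, 3, 4], [1, 2, 4, 3]))  # 1 (one adjacent swap)
```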
Abdollahi, A. ,
Bagherian, J. ,
Jafari, F. ,
Khatami bidgoli, M. ,
Parvaresh, F. ,
Sobhani, R. Cryptography and Communications (19362447) 17(4)pp. 1075-1091
In this paper, we study the largest size A(n, d) of permutation codes of length n, i.e., subsets of the set S_n of all permutations on n letters with minimum distance at least d under the Hamming metric. In Abdollahi et al. (Cryptogr. Commun. 15, 891–903, 2023) we developed a method using the representation theory of symmetric groups to find upper bounds on the size of permutation codes in S_n with minimum distance d under the Kendall τ-metric. That method applies to permutation codes under metrics induced by Cayley graphs of S_n. Since the metric induced by any Cayley graph of S_n is not equivalent to the Hamming metric, the method cannot be used directly for the Hamming metric. In this paper we find a trick by which we can again use the method to find upper bounds for A(n, 2t+1). We present three practical results that prove the non-existence of perfect 2-error-correcting codes in S_n under the Hamming metric for numerous values of n. Specifically, we prove that 91 and 907 are the only values of n ≤ 1000 for which S_n may contain a perfect 2-error-correcting code under the Hamming metric. Additionally, we prove that for any integer n such that n²−n+2 is divisible by a prime exceeding n−⌊n/7⌋, (Formula presented.) This result improves the known upper bounds on A(n, 5) for all integers n ≥ 35 such that n²−n+2 is divisible by a prime exceeding n−⌊n/7⌋. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
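The quantity n²−n+2 above has a simple interpretation: two distinct permutations differ in at least two positions, so a Hamming ball of radius 2 in S_n contains exactly 1 + n(n−1)/2 = (n²−n+2)/2 permutations. The sketch below applies only the classical sphere-packing necessary condition for a perfect 2-error-correcting code, which is much weaker than the paper's representation-theoretic tests; the scan range is an arbitrary choice for illustration.

```python
from math import comb, factorial

def largest_prime_factor(m):
    largest, d = 1, 2
    while d * d <= m:
        while m % d == 0:
            largest, m = d, m // d
        d += 1
    return m if m > 1 else largest

def ball_size_radius2(n):
    # |B_2| in S_n: the centre plus the C(n, 2) transpositions
    # (Hamming distance 1 between distinct permutations is impossible).
    return 1 + comb(n, 2)                        # equals (n*n - n + 2) // 2

def survives_sphere_packing(n):
    # A perfect 2-error-correcting code needs |B_2| to divide n!, which fails
    # as soon as |B_2| has a prime factor exceeding n.
    v = ball_size_radius2(n)
    return largest_prime_factor(v) <= n and factorial(n) % v == 0

print([n for n in range(4, 120) if survives_sphere_packing(n)])
```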
IEEE Communications Letters (10897798) (7)
Spatially coupled low-density parity-check convolutional codes (SC-LDPC-CCs) are a class of capacity-approaching LDPC codes with low message recovery latency when sliding window decoding is used. Recently, an edge spreading (unwrapping) approach has been proposed to generate a class of array-based SC-LDPC-CCs by partitioning a spreading matrix into a number of components. In this letter, applying the spreading matrix E as the exponent matrix of a QC-LDPC code, the relationship between 4-cycles in E and in the corresponding SC-LDPC-CCs is investigated in two cases. Then, by defining a new class of integer finite sequences, called good sequences, a class of 4-cycle-free SC-LDPC-CCs is constructed by an explicit, simple method that can achieve (in most cases) the minimum coupling widths. The constructed codes offer flexibility in rate and length, simple construction, and favorable short-cycle distributions, which translate into improved bit-error-rate performance. © 1997-2012 IEEE.
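As background for the 4-cycle analysis mentioned above, the sketch below checks the well-known condition for QC-LDPC exponent matrices: with circulant size N, the Tanner graph contains a 4-cycle exactly when some 2×2 submatrix of the exponent matrix satisfies E[i1][j1] − E[i1][j2] + E[i2][j2] − E[i2][j1] ≡ 0 (mod N). This is a generic check, not the good-sequence construction of the letter, and the example matrices are arbitrary.

```python
from itertools import combinations

def has_four_cycle(E, N):
    """4-cycle test for the QC-LDPC code with exponent matrix E and circulant
    size N: look for a 2x2 submatrix whose alternating sum is 0 mod N."""
    rows, cols = len(E), len(E[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % N == 0:
                return True
    return False

# Array-based example E[i][j] = i*j (mod N): 4-cycle free for prime N.
N = 7
E = [[(i * j) % N for j in range(3)] for i in range(3)]
print(has_four_cycle(E, N))                      # False
print(has_four_cycle([[0, 0], [0, 0]], N))       # True (all-zero exponents)
```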
Wireless Networks (10220038) (4)
In this paper, we delve into the intricacies of a network architecture enabled by two layers of caches, where users receive their requested content via intermediate helpers connected to a central server. While coded caching in a two-layer hierarchical model has previously demonstrated its potential to enhance data rates when cache capacities are uniform and no coordination exists between users, our work takes a leap further. We introduce the dimension of heterogeneous cache sizes among both the helpers and the users, addressing scenarios where the number of popular files can be either less or more than the number of users within the network. Leveraging a recently proposed modified coded caching scheme, combined with a zero-padding technique, we present novel results on data rates, supported by an illustrative example. Our contribution extends to the formulation of two distinct coded schemes within the hierarchical scenario. Furthermore, we optimize the proportions of files and memories allocated to each scheme to facilitate data transfer efficiency, and then derive a lower bound on the total rate. Moreover, we demonstrate that the total rate achieved by the proposed heterogeneous approach is lower than that of a homogeneous network whose caches equal the minimum cache size present in the heterogeneous network, but higher than that of a homogeneous network with the same average cache size. In addition, we illustrate that by proper selection of the proportions of files and memories allocated to each scheme, the performance degradation due to the heterogeneity of the network can be decreased. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
IEEE Transactions on Instrumentation and Measurement (00189456) 73pp. 1-9
High impedance fault (HIF) detection is a challenging problem in distribution power system protection. Detecting these faults with overcurrent protection relays is difficult or impossible. Therefore, system recovery is postponed, and individuals may be exposed to the risk of electric shock. This article defines a family of HIF current-time patterns with different ignition angles. To detect the HIF current, the proposed detection method searches its family of HIF patterns and finds the pattern that has the highest correlation with the HIF current. Based on the measured correlations, a time-domain algorithm is proposed for HIF detection. If the highest correlation between the family of HIF patterns and the measured current signal is above a given threshold value, then the fault is detected as an HIF. As verified by simulations and experimental results, the proposed method can correctly detect an HIF within one power system cycle. © 1963-2012 IEEE.
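A minimal sketch of the correlate-and-threshold logic described above, with a synthetic template family standing in for the paper's HIF current-time patterns; the pattern model, window length, and threshold value are assumptions made only for illustration.

```python
import numpy as np

def detect_hif(measured, templates, threshold=0.8):
    """Correlate one cycle of measured current with every template in the
    family; report an HIF if the highest normalized correlation exceeds the
    threshold. Returns (is_hif, best_template_index, best_correlation)."""
    x = (measured - measured.mean()) / (measured.std() + 1e-12)
    best_corr, best_idx = -1.0, -1
    for k, tpl in enumerate(templates):
        y = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
        corr = float(np.dot(x, y)) / len(x)      # Pearson-style correlation
        if corr > best_corr:
            best_corr, best_idx = corr, k
    return best_corr > threshold, best_idx, best_corr

# Toy template family: distorted half-cycles with different "ignition angles".
fs, f0 = 10_000, 50
t = np.arange(int(fs / f0)) / fs                 # one power-system cycle
templates = [np.sign(np.sin(2 * np.pi * f0 * t))
             * np.abs(np.sin(2 * np.pi * f0 * t - phi)) ** 1.5
             for phi in np.linspace(0, np.pi / 3, 8)]
measured = templates[3] + 0.05 * np.random.randn(len(t))
print(detect_hif(measured, templates))
```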
IEEE Transactions on Intelligent Transportation Systems (15249050) (12)
In traffic sign recognition tasks, recognizing road signs by observing synthetic reference images is a human-like ability that can be performed by one-shot learning algorithms. One-shot object recognition is a challenging task for deep neural networks in which a deep model classifies query examples based on support images. It becomes more difficult when there is a domain shift between support and query samples. Generalization of a deep model to an unknown domain with a different distribution is another problematic task in one-shot recognition. This work introduces a novel deep network named SeqNet to overcome the aforementioned problems. To the best of our knowledge, this work outperforms all state-of-the-art models in one-shot traffic sign recognition and one-shot logo identification. Our proposed SeqNet model generalizes to unseen domains without further fine-tuning on the test data. Also, we show how using knowledge transferred from an irrelevant but large domain can reduce the number of network parameters, resulting in model size reduction. By utilizing the power of knowledge transferred from a large deep model, SeqNet becomes smaller and has about 6X fewer parameters than its competitors. The smaller size of the SeqNet architecture enables it to be used in resource-constrained devices in many applications such as smart vehicles. The experimental results show that the proposed SeqNet improves performance by large margins, with gains of up to 20% in accuracy for one-shot classification and 30% in area under the curve (AUC) for image retrieval tasks. © 2000-2011 IEEE.
Etzion, Tuvi ,
Siegel, Paul H. ,
Kiah, Han Mao ,
Mahdavifar, Hessam ,
Parvaresh, F. IEEE Journal on Selected Areas in Information Theory (26418770)
Abdollahi, A. ,
Bagherian, J. ,
Jafari, F. ,
Khatami bidgoli, M. ,
Parvaresh, F. ,
Sobhani, R. Cryptography and Communications (19362447) 15(5)pp. 891-903
We give two methods that are based on the representation theory of symmetric groups to study the largest size P(n, d) of permutation codes of length n, i.e., subsets of the set S_n of all permutations on {1, ..., n} with minimum distance at least d under the Kendall τ-metric. The first method is an integer programming problem obtained from the transitive actions of S_n. The second method can be applied to refute the existence of perfect codes in S_n. Applying these methods, we reduce the known upper bound (n−1)! − 1 for P(n, 3) to (n−1)! − ⌈n/3⌉ + 2 ≤ (n−1)! − 2, whenever n ≥ 11 is prime. If n = 6, 7, 11, 13, 14, 15, 17, the known upper bound for P(n, 3) is decreased by 3, 3, 9, 11, 1, 1, 4, respectively. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Physical Communication (18744907)
The diversity-multiplexing trade-off (DMT) of a diamond relay channel in a quasi-static slow Rayleigh fading environment is studied, where one of the relays operates in half-duplex mode while the other one is full-duplex; hence, the term hybrid is used to refer to this channel. DMT is a known powerful tool that captures the inherent tension between reliability and transmission rate in a wireless communication network at high signal-to-noise ratios (SNR). In this study, we assume that the average SNRs of different links vary at different rates, each represented by an exponent at high SNR. These average SNR exponents are categorized into two classes: in Class I, both relays are placed either in close proximity to the source or in close proximity to the destination, whereas in Class II one of the relays is closer to the source while the other one is closer to the destination. Subsequently, for Class I and for a special symmetric case of Class II, an explicit expression for the DMT of the static quantize-map-and-forward (SQMF) strategy and the optimal static schedule of the half-duplex relay are derived. The non-symmetric case of Class II can be studied numerically using the optimization problem provided in the paper for any given parameters. As a corollary, our analysis shows that for Class I, the SQMF strategy in the proposed hybrid diamond relay channel reaches the ideal full-duplex DMT upper bound if the degrees of freedom provided by the path through the full-duplex relay are greater than or equal to those provided by the path through the half-duplex relay, or if the degrees of freedom provided by the former are less than those provided by the latter but still large enough to support the desired multiplexing gain. © 2023 Elsevier B.V.
IET Generation, Transmission and Distribution (17518687) (16)
The basis of traveling wave-based fault location methods is to extract the arrival times of transient signals in the power network. In this paper, a new method for extracting the traveling wave arrival times is presented. For this purpose, the aerial modes of the three-phase voltage signals are extracted and two sequential sliding windows of unequal length move along this signal. By fitting a line to the samples inside each of the two windows and calculating the angle between them, the traveling wave arrival times can be determined with high accuracy. Because this takes place in the time domain, there is no need to switch between time and frequency domains as in Fourier-transform-based methods. Moreover, the curve fitting reduces the negative effects of noise and sampling-frequency changes. EMTP-ATP is applied to perform the transient simulations, and the results are then analysed in MATLAB to conduct a sensitivity analysis against measurement noise, sampling frequency, and fault parameters. The proposed technique is compared to common techniques such as the discrete wavelet transform and the Hilbert-Huang transform. The results demonstrate that the proposed technique has acceptable performance and covers the drawbacks of the common methods. © 2022 The Authors. IET Generation, Transmission & Distribution published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
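A simplified sketch of the two-sliding-window idea described above: fit a line to the samples of each of two back-to-back windows and flag the sample where the angle between the fitted lines is largest. The window lengths and the synthetic test signal are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def arrival_time_index(signal, fs, w1=40, w2=10):
    """Slide two adjacent windows (lengths w1, w2) over the signal, fit a line
    to each, and return the index where the angle between the lines peaks."""
    t = np.arange(len(signal)) / fs
    best_angle, best_idx = 0.0, None
    for k in range(w1, len(signal) - w2):
        s1 = np.polyfit(t[k - w1:k], signal[k - w1:k], 1)[0]   # slope, long window
        s2 = np.polyfit(t[k:k + w2], signal[k:k + w2], 1)[0]   # slope, short window
        angle = abs(np.arctan(s2) - np.arctan(s1))
        if angle > best_angle:
            best_angle, best_idx = angle, k
    return best_idx

# Toy aerial-mode signal: 50 Hz component plus a damped transient arriving at 2 ms.
fs = 1_000_000
t = np.arange(0, 0.004, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * (t >= 0.002) * np.exp(-(t - 0.002) * 2e4)
print(arrival_time_index(x, fs) / fs)            # prints a time near 0.002 s
```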
IET Generation, Transmission and Distribution (17518687) (6)
This paper proposes a high-speed and accurate method for extracting the arrival times (ATs) of traveling waves (TWs) in the power system that can be used for fault location applications and transmission line protection. The proposed method is based on expressing the traveling wave, within a small time window, as an exponentially damped component superimposed on a sinusoidal wave. The sinusoidal component is then removed from the resulting wave using an approximation and consecutive differences of the input signal samples (the modal components of the three-phase voltages or currents). Because the estimation of traveling wave arrival times (TWATs) is carried out with basic mathematical operations in the time domain, the proposed method, in addition to its simplicity of implementation, has the accuracy and speed appropriate for applications such as online fault location and protection. In order to evaluate the performance of the proposed approach in fault location, a 230 kV transmission line is modelled in EMTP/ATP. Then, fault location is studied in MATLAB under various conditions, including changes in the location and type of fault, fault conditions, as well as variations in the measurement noise level and sampling frequency. © 2021 The Authors. IET Generation, Transmission & Distribution published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology
Electric Power Systems Research (03787796)
This paper proposes a single-ended traveling-wave-based fault location scheme for transmission systems consisting of an overhead line combined with an underground cable. In the first step, the proposed algorithm uses the three-phase voltages measured through optical sensors at the fault locator installation bus to extract the traveling wave arrival times using two consecutive sliding windows and curve fitting. Then, using the arrival times of three consecutive collisions and changes of their polarity, the faulty section is identified. Finally, the exact location of the fault is determined. To evaluate the performance of the proposed algorithm, transient state simulations were performed in EMTP-ATP for various conditions of fault and system parameters. The transient signals are processed and the results prove the high accuracy of the proposed method. © 2022
Abdollahi, A. ,
Bagherian, J. ,
Jafari, F. ,
Khatami bidgoli, M. ,
Parvaresh, F. ,
Sobhani, R.
In order to overcome the challenges posed by flash memories, the rank modulation scheme was proposed. In rank modulation, the codewords are permutations. In this paper, we study permutation codes with a specified length and minimum Kendall τ-distance, and with as many codewords (permutations) as possible. We make many significant improvements in the size of the best known codes. In particular, we show that for all n ≥ 6 and for all $\frac{3}{5}\binom{n}{2} < d \leq \frac{2}{3}\binom{n}{2}$, the largest size of a permutation code of length n and minimum distance at least d under the Kendall τ-metric is 4. © 2022 IEEE.
Probabilistic constellation shaping (PCS) has attracted a lot of attention for increasing the spectral efficiency (SE) of coherent fiber-optic communication. Unfortunately, increasing the transmit power can increase non-linear self- and cross-phase modulation due to the Kerr effect in optical fiber. Owing to its smaller peak-to-average power ratio (PAPR), higher angular distance between constellation points, and fewer amplitude levels, amplitude and phase shift keying (APSK) can be more robust to phase noise than quadrature amplitude modulation (QAM). A new PCS scheme is proposed for the standard 16- and 64-APSK DVB-S2/S2X modulation formats. Simulation results in additive white Gaussian noise (AWGN) channels show a bit error rate performance for the proposed system close to that of probabilistically shaped QAM. © 2021 IEEE.
IET Communications (17518628) 15(14)pp. 1808-1820
In this paper, two-hop parallel N-relay networks are considered and the diversity-multiplexing tradeoff (DMT) is derived over Gamma-Gamma free-space optical (FSO) channels with identical average received signal-to-noise ratios in all links. In the derivations, both local and global channel state information (CSI) are investigated. In the local CSI case, a node only knows the conditions of its incoming links, while in the global CSI case, the nodes are aware of all the CSI in the network. The listening and transmitting times at the relays are further treated as variables in the DMT derivation and are later used to optimize the performance of the network. It is demonstrated that the optimal DMT is obtained with the static quantize map and forward (SQMF) and dynamic quantize map and forward (DQMF) strategies for different ranges of the multiplexing gain. In addition, the optimal schedule of the relays in the DQMF strategy is determined as a function of the local channel conditions at the relays. © 2021 The Authors. IET Communications published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology
Physical Communication (18744907) 46
Two-hop parallel networks with half-duplex relays over quasi-static Rayleigh fading channels are considered. Assuming relays have only local channel state information, optimal communication schemes for these networks in terms of the diversity multiplexing trade-off (DMT) are obtained. It is shown that the dynamic quantize-map-and-forward (DQMF) and the static quantize-map-and-forward (SQMF) communication strategies achieve the optimal DMT at different ranges of the multiplexing gain. In the DQMF scheme, a relay sets its transmitting/listening times according to the instantaneous source-relay channel condition. However, in the SQMF scheme, a relay sets its transmitting/listening times to be a constant independent of the instantaneous source-relay channel gain. Essentially, we calculate the DMT of the network under the general setting for arbitrary average signal-to-noise ratios over links and it is shown that the DMT of the proposed schemes matches the upper bounds. Moreover, the optimal transmitting/listening times of the half-duplex relays as functions of the instantaneous channel realizations are derived for different communication strategies in the general setting. Finally, various simulation scenarios are considered and the DMT of the proposed schemes and the dynamic decode and forward communication strategy are compared. © 2021 Elsevier B.V.
IEEE Signal Processing Letters (10709908) 28pp. 1868-1872
In this paper, an empirical characteristic function (ECF) based estimator of the line-of-sight (LOS) power for the near user in an uplink non-orthogonal multiple access (NOMA) system with unknown impulsive noise is proposed. The channels between the users and the base station (BS) are assumed to have Rician fading. The performance of the proposed estimator is analytically evaluated, which demonstrates the asymptotic unbiasedness and consistency of the estimator. Furthermore, the approximate expression derived for the variance is confirmed through computer simulations. Numerical results show that the proposed estimator outperforms the previous estimator for the LOS power in the mixture of Gaussian and α-stable noise. © 1994-2012 IEEE.
Vehicular Communications (22142096) 29
In this paper, we derive an analytical expression for the bit error probability of a multi-threshold (MTh) detector in the downlink three-user non-orthogonal multiple access (NOMA) based vehicular communication system with unknown noise. The MTh detector directly detects the signal of vehicle user with the highest channel gain without detecting the signals of weaker users and removing their effects. It is shown that the bit error probability performance of the MTh detector is superior to that of the successive interference cancellation (SIC) detector in a special case. Then, we propose a blind empirical characteristic function (ECF) based method to estimate the signal levels of vehicle users, required to implement the MTh detector, in unknown noise. Next, we derive an analytical expression for the variance of the proposed ECF-based estimator which is validated via computer simulations. It is shown that the ECF-based estimator is asymptotically unbiased and consistent. Furthermore, we obtain the normalized Cramér-Rao lower bound (NCRLB) for the estimators of signal levels which shows that the ECF-based estimator is almost efficient in small generalized signal-to-noise ratio (GSNR) values. Numerical results show that the ECF-based MTh detector outperforms the absolute median-based SIC detector in the mixture of Gaussian and α-stable noise. © 2020 Elsevier Inc.
IET Radar, Sonar and Navigation (17518784) 14(3)pp. 487-494
In this study, the authors address the issue of clutter mitigation in monostatic pulse-Doppler radars, using sub-Nyquist sampling and compressed sensing (CS) recovery algorithms. Generally, clutter is modelled as a coloured random process and a whitening filter is employed to maximise the signal to clutter ratio. Here, the authors apply this approach to the case in which the received signal sampling rate is much smaller than the Nyquist rate. They also generalise the recently proposed sub-Nyquist method based on difference set (DS) codes, which uses very few samples to detect targets in the presence of only white Gaussian noise. By using the frequency-coded modulation waveform with the frequencies selected in accordance to the DS, the detection performance of the proposed method can be significantly improved compared with the previously proposed sub-Nyquist methods in the presence of clutter. Furthermore, the authors show that the proposed method can increase the target recovery resolution due to the features of the DS codes and the employed CS model. © The Institution of Engineering and Technology 2020.
IEEE Transactions on Parallel and Distributed Systems (10459219) (2)
We consider the load balancing problem in a cache network consisting of storage-enabled servers forming a distributed content delivery scenario. Previously proposed load balancing solutions cannot perfectly balance out requests among servers, which is a critical issue in practical networks. Therefore, in this paper, we investigate a coded cache content placement where coded chunks of the original files are stored in servers based on the files' popularity distribution. In our scheme, upon each request arrival at the delivery phase, by dispatching enough coded chunks to the request origin from the nearest servers, the requested file can be decoded. Here, we show that if n requests arrive randomly at n servers, the proposed scheme results in a maximum load of O(1) in the network. This result is shown to be valid under various assumptions for the underlying network topology. Our results should be compared to the maximum load of two baseline schemes, namely, the nearest replica and power of two choices strategies, which are Θ(log n) and Θ(log log n), respectively. This finding shows that using coding results in a considerable load balancing performance improvement, without compromising communication cost performance. This is confirmed by extensive simulations, in non-asymptotic regimes as well. © 1990-2012 IEEE.
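To make the baseline comparison above concrete, the quick simulation below contrasts the maximum load of purely random request placement with the power-of-two-choices policy; it does not implement the paper's coded placement and delivery scheme.

```python
import random
from collections import Counter

def max_load_random(n):
    """Each of n requests goes to a uniformly random server."""
    loads = Counter(random.randrange(n) for _ in range(n))
    return max(loads.values())

def max_load_two_choices(n):
    """Each request samples two random servers and joins the less loaded one."""
    loads = [0] * n
    for _ in range(n):
        a, b = random.randrange(n), random.randrange(n)
        loads[a if loads[a] <= loads[b] else b] += 1
    return max(loads)

n = 100_000
print("random placement:", max_load_random(n))        # keeps growing slowly with n
print("two choices     :", max_load_two_choices(n))   # roughly log log n
```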
AEU - International Journal of Electronics and Communications (14348411)
The diversity-multiplexing tradeoff (DMT) of a full-duplex diamond network where the inter-relay interference (IRI) is suppressed, global channel state information (CSI) is available at all nodes, and a perfect self-interference cancellation (SIC) technique is used, which we call the perfect full-duplex diamond network, is an upper bound for the DMT of the equivalent half-duplex diamond network; i.e., the perfect full-duplex diamond network outperforms the half-duplex case. Nevertheless, implementing the perfect full-duplex diamond network is a high-cost operation and requires significant hardware changes in current systems. The main question addressed in this paper is: under which condition (or perhaps conditions) does a prevalent half-duplex diamond network provide the same performance as the perfect full-duplex one? In this regard, first, we calculate the DMT of a perfect full-duplex multiple-relay and multiple-antenna (MRMA) symmetric diamond network. Then, we demonstrate that for the same network but with half-duplex relays, when the number of antennas at the relays satisfies a specific condition, there exists a fixed listen-transmit schedule combined with quantize-map-and-forward (QMF) relaying that achieves the optimal DMT. Also, we obtain a lower bound on the total number of antennas of the MRMA symmetric diamond network needed to achieve the optimal DMT. © 2020 Elsevier GmbH
Wireless Networks (10220038) 26(5)pp. 3521-3537
Vehicular ad hoc networks (VANETs) are able to facilitate data exchange among vehicles and provide diverse data services. The benefits of cooperative communication make it a natural approach for improving the achievable rate and diversity in VANETs. We investigate a dual-hop relay vehicular network with different transmit powers at the vehicles over non-uniform shadowed double Nakagami-m fading channels in the presence of non-uniform co-channel interferers. In this paper, we investigate the diversity-multiplexing tradeoff (DMT) in such a network for three different schemes, namely, dynamic decode and forward (DDF), dynamic quantize map and forward, and static quantize map and forward, and prove that the DDF strategy has the optimal DMT. Also, the optimal listening and transmitting times of the vehicles in the various strategies are determined as functions of the channel conditions at the vehicles. Finally, we extend our results to a general multi-hop relay vehicular network model with multiple half-duplex relay terminals and different average SNRs over the links. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
Wireless Personal Communications (09296212) (4)
Decentralized hierarchical coded caching is studied with two layers of caches, in which users receive their demands from a main server through intermediate helpers. Coded caching in the hierarchical model reduces the rates in both layers, and this scheme has been shown to be order-optimal when all contents are uniformly popular and there is no tension between the rates of the two layers. We consider two-layer coded caching over heterogeneous wireless networks, which can reduce the rates more effectively in both layers. The contents in our work are multi-level, based on their different levels of popularity. We use two different approaches in our hierarchical model: coded caching provided within each layer, and coded caching provided across multiple layers. We consider a combination of these two approaches and then optimize the proportion of each approach to minimize the rates of the first and second layers. We show that the rates in our scheme are much lower than those of the traditional hierarchical model. Moreover, a common-memory method is introduced in the two-layer network and is compared with the other schemes. It can be observed that the rates of the common-memory method are lower than those of the previous hierarchical model but higher than those of our proposed multi-level scheme. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
IEEE Communications Letters (10897798) (11)
In this letter, we propose a novel estimator for pilot-aided orthogonal frequency division multiplexing (OFDM) channels in an additive Gaussian and impulsive perturbation environment. Due to sensor failures, which might happen because of man-made noise, a number of measurements in high-rate communication systems are often corrupted by impulsive noise. High-power impulsive noise is generally an obstacle for OFDM systems, as valuable information can be completely lost. To overcome this concern, an objective function based on penalized atomic norm minimization (PANM) is provided in order to promote the sparsity of the time-dispersive channels and the impulsive noise. The corresponding dual problem of the PANM is then converted to tractable semidefinite programming. It is shown that one can simultaneously estimate the time-dispersive channels in a continuous dictionary and the locations of the impulsive noise using the dual problem. Several numerical experiments are carried out to evaluate the performance of the proposed estimator. © 1997-2012 IEEE.
IEEE Transactions on Signal Processing (19410476) 67(1)pp. 136-148
In this paper, we consider compressive sensing (CS)-based recovery of the delays and Doppler frequencies of targets in high resolution radars. We propose a novel sub-Nyquist sampling method in the Fourier domain based on difference sets (DS), called DS-sampling, to create dictionaries with highly incoherent atoms. The coherence of the dictionary reaches the Welch minimum bound if DS-sampling is employed. This property lets us implement sub-Nyquist high resolution radars with a minimum number of samples. Two low-complexity recovery methods are developed, and sufficient conditions for target recovery with a specific resolution are obtained theoretically for noisy and noiseless conditions. We also propose a new waveform, called the DS-frequency coded modulated waveform, to boost the recovery performance of the sub-Nyquist radar in noisy environments. The proposed method solves some of the common problems in many CS-based radars and overcomes disadvantages of conventional Nyquist processing (i.e., matched filtering) in high resolution radar systems. The proposed method allows us to design sub-Nyquist radars that require less than 2% of the Nyquist samples and recover targets without resolution degradation in comparison to conventional Nyquist processing. © 2018 IEEE.
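To illustrate the difference-set idea referenced above, the sketch below builds a quadratic-residue (Paley) difference set for a prime p ≡ 3 (mod 4), selects the corresponding rows of the p-point DFT matrix, and compares the coherence of the resulting dictionary with the Welch lower bound; the prime p = 43 is an arbitrary choice for illustration.

```python
import numpy as np

def quadratic_residue_difference_set(p):
    """For a prime p ≡ 3 (mod 4), the quadratic residues mod p form a
    (p, (p-1)/2, (p-3)/4) difference set."""
    assert p % 4 == 3
    return sorted({(x * x) % p for x in range(1, p)})

def coherence(A):
    """Largest absolute inner product between distinct unit-norm columns."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0)
    return G.max()

p = 43
rows = quadratic_residue_difference_set(p)            # (p - 1) / 2 = 21 rows
F = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p)
A = F[rows, :]                                        # 21 x 43 partial Fourier dictionary
M, N = A.shape
welch = np.sqrt((N - M) / (M * (N - 1)))
print(coherence(A), welch)                            # the two values coincide
```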
Multidimensional Systems And Signal Processing (15730824) 30(1)pp. 93-117
An anti-deception jamming technique is proposed for moving target indication in a pulse-Doppler (PD) radar. The deceptive targets are produced by a digital radio frequency memory, which tries to pull off the range and velocity gates of the real targets. Similar to orthogonal frequency division multiplexing, we use different sets of orthogonal sub-carriers in consecutive coherent pulse intervals (CPIs). By changing the sub-carriers in different CPIs, we show that the deceptive targets appear as interference in the received signals. The generalized likelihood ratio test is used for detection and discrimination of the real targets. The performance of the proposed method is evaluated analytically and by simulations. Furthermore, we implement a hardware block using a TMS6416-DSK DSP for a PD radar prototype exploiting the proposed algorithm for deception discrimination. The presented results demonstrate good agreement with the theoretical predictions. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
IET Communications (17518628) 13(5)pp. 610-619
Dirty paper coding (DPC) is a non-linear transmit scheme in the presence of known interference, which is capacity achieving. Unfortunately, the implementation complexity of DPC is usually prohibitive. A sub-optimal yet practical method to implement DPC in inter-symbol interference channels is Tomlinson-Harashima precoding (THP). However, THP is not applicable to constant-envelope phase shift keying (PSK) modulation. In this study, a modified Tomlinson-Harashima precoder is proposed to implement DPC for PSK modulation. The technique involves the design of the decision region at the receiver side and, accordingly, an algorithm to optimally select the phase of the transmitted symbol at the transmitter side. Various patterns, namely, striped, checked, hexagonal, and radial, are proposed and examined. Simulation results show that by using the proposed equaliser, the error floor is eliminated and a significantly smaller bit-error rate is achieved. Numerical and simulation results show that the striped pattern outperforms the other decision region designs. Moreover, its implementation and computational complexities are almost the same as those of the Tomlinson-Harashima precoder. © The Institution of Engineering and Technology 2018.
IEEE Transactions on Signal Processing (1053587X) (13)
In this paper, we address the problem of recovering point sources from two-dimensional low-pass measurements, which is known as the super-resolution problem. This is a fundamental concern of many applications such as electronic imaging, optics, microscopy, and line spectral estimation. We assume that the point sources are located in the square [0,1]² with unknown locations and complex amplitudes. The only available information is low-pass Fourier measurements band-limited to the integer square [−f_c, f_c]². The sources are estimated by minimizing the total variation norm, which leads to a convex optimization problem. We observe in numerical results that if the sources are closer than 1.4/f_c, they might not be recovered by the proposed optimization. We theoretically show that there exists a dual certificate which guarantees exact recovery when the sources are separated by at least 1.68/f_c, reducing the gap between the available theoretical guarantee on the source separation and the observed results in the simulations. © 2019 IEEE.
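For reference, the estimator referred to above is the usual total-variation minimization for 2-D super-resolution; a sketch of that standard formulation is given below, with μ̃ the optimization variable.

```latex
\min_{\tilde{\mu}} \ \|\tilde{\mu}\|_{\mathrm{TV}}
\quad \text{subject to} \quad
y(k) = \int_{[0,1]^2} e^{-i 2\pi \langle k, t \rangle}\, \mathrm{d}\tilde{\mu}(t),
\qquad k \in \{-f_c, \dots, f_c\}^2 ,
```

where the data are generated by the true measure μ = Σ_j a_j δ_{t_j} with a_j ∈ ℂ and t_j ∈ [0,1]²; the dual certificate mentioned above certifies that μ is the unique minimizer under the stated separation condition.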
AEU - International Journal of Electronics and Communications (16180399) 84pp. 13-20
Cyclic frequency detection of cyclostationary signals and its application to modulation parameter estimation is the aim of this paper. A simple frequency shifted (FRESH) model is structured. The normalized least mean square (NLMS) algorithm is employed to estimate the unknown weights of the considered model. The norm of the parameter estimate is related to the model accuracy: a high (low) value indicates that the FRESH structure is based on an accurate (inaccurate) cyclic frequency, and the weight estimate of the NLMS algorithm converges to a nonzero (zero) vector. Accordingly, for two given values, only one of which is the true cyclic frequency of the received signal, two FRESH structures are considered and two NLMS algorithms are executed simultaneously. If the norm of the weight estimate of the first algorithm exceeds the norm obtained by the second one, according to the proposed method, the first given value is the true cyclic frequency, and vice versa. The method is used to estimate the modulation parameters in three different problems: chip rate estimation of a spread spectrum signal, carrier frequency and bit rate estimation of BPSK and QPSK signals, and symbol rate estimation of OFDM signals. © 2017 Elsevier GmbH
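The weight-norm test described above rests on the standard NLMS update; a minimal version is sketched below. The one-tap FRESH regressor (a single frequency-shifted copy of the input), the toy BPSK-like signal, and the step size are simplifying assumptions, not the paper's configuration.

```python
import numpy as np

def nlms_weight_norm(d, X, mu=0.02, eps=1e-8):
    """Run complex NLMS with regressor rows X[k] predicting d[k]; return the
    final weight-vector norm, i.e. the detection statistic discussed above."""
    w = np.zeros(X.shape[1], dtype=complex)
    for k in range(len(d)):
        x = X[k]
        e = d[k] - np.vdot(w, x)                      # a priori error
        w = w + mu * np.conj(e) * x / (np.vdot(x, x).real + eps)
    return np.linalg.norm(w)

# Toy check: a BPSK-like signal with carrier f0 has a cyclic frequency at 2*f0,
# so the weight norm comes out clearly larger for alpha = 2*f0 than off-cycle.
fs, f0, sym_len, N = 8000.0, 1000.0, 8, 20000
rng = np.random.default_rng(0)
n = np.arange(N)
bits = np.repeat(np.sign(rng.standard_normal(N // sym_len)), sym_len)
d = bits * np.cos(2 * np.pi * f0 * n / fs) + 0.1 * rng.standard_normal(N)
for alpha in (2 * f0, 2 * f0 + 170.0):                # true vs. wrong cyclic frequency
    X = (d * np.exp(-2j * np.pi * alpha * n / fs))[:, None]   # one-tap FRESH regressor
    print(alpha, nlms_weight_norm(d, X))
```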
IEEE Transactions on Instrumentation and Measurement (00189456) 67(3)pp. 582-592
This paper presents a new filter-based algorithm to estimate the fundamental phasor of fault currents, using the proposed complex frequency filter. The presence of a decaying dc component (ddc) in fault currents causes large errors in discrete Fourier transform (DFT)-based phasor estimation methods. The proposed filter discriminates the ddc from dc and sinusoidal signals. In this algorithm, the ddc is first filtered out by the proposed filter. Then, the fundamental phasor can be precisely extracted by applying the DFT. Second- and fourth-order versions of the complex frequency filter and their design method are presented. To evaluate the performance of the proposed algorithm, several test signals are used. The algorithm is also compared with recent DFT-based and mimic-DFT methods, using four performance indices. The simulation results demonstrate the robustness of the proposed algorithm against harmonics, noise, multiple ddc components, and off-nominal frequency. It also offers faster convergence speed. The proposed complex frequency filter can also be used for phasor estimation of harmonic components in the presence of ddc with no need to readjust its parameters. © 2018 IEEE.
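As context for the problem addressed above, the snippet below shows the standard full-cycle DFT phasor estimate and how a decaying dc component biases it; the proposed complex frequency filter is not reproduced here, and the signal parameters are illustrative.

```python
import numpy as np

def full_cycle_dft_phasor(x):
    """Standard full-cycle DFT estimate of the fundamental phasor from exactly
    one cycle of N samples (magnitude equals the sinusoid's peak amplitude)."""
    N = len(x)
    n = np.arange(N)
    return (2.0 / N) * np.sum(x * np.exp(-2j * np.pi * n / N))

fs, f0, N = 3200, 50, 64                         # 64 samples per 50 Hz cycle
t = np.arange(N) / fs
clean = 10.0 * np.cos(2 * np.pi * f0 * t - np.pi / 4)
ddc = 5.0 * np.exp(-t / 0.03)                    # decaying dc, 30 ms time constant
for x in (clean, clean + ddc):
    ph = full_cycle_dft_phasor(x)
    print(abs(ph), np.degrees(np.angle(ph)))     # the ddc biases magnitude and angle
```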
IEEE Transactions on Instrumentation and Measurement (00189456) 67(11)pp. 2592-2602
Frequency estimation is vital for the monitoring, control, and protection of power systems. The multiple signal classification (MUSIC) method has been used for frequency estimation in communication systems. This method requires a wide range of frequency scanning and heavy computations, which is inappropriate for power system applications. This paper proposes an adaptive accelerated MUSIC algorithm for frequency estimation in power systems. The algorithm accelerates the frequency scanning and adapts itself to transient conditions of power systems, keeping the accuracy of the frequency estimation with minimum computations. Experimental signals, as well as several static and dynamic test signals, are used to evaluate and compare the performance of the proposed algorithm with recent methods. These comparisons show that the proposed algorithm provides faster convergence speed, lower computational burden, and more robustness to noise and grid disturbances. These significant advantages make the proposed algorithm appropriate for power system protection and control applications. © 1963-2012 IEEE.
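For readers unfamiliar with the baseline, a compact textbook MUSIC frequency estimator with a fixed scan grid is sketched below; the adaptive, accelerated scanning proposed in the paper is not reproduced, and the snapshot length and grid are arbitrary choices.

```python
import numpy as np

def music_spectrum(x, fs, model_order, subspace_dim, freqs):
    """Textbook MUSIC: estimate a correlation matrix from overlapping snapshots,
    keep the noise subspace, and return the pseudospectrum on a frequency grid."""
    m = subspace_dim
    snapshots = np.array([x[k:k + m] for k in range(len(x) - m)])
    R = snapshots.T @ snapshots.conj() / len(snapshots)
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : m - model_order]           # eigenvectors of the smallest eigenvalues
    n = np.arange(m)
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n / fs)      # steering vector at frequency f
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

fs = 1600.0
t = np.arange(0, 0.2, 1 / fs)
x = np.cos(2 * np.pi * 49.7 * t) + 0.05 * np.random.randn(len(t))
freqs = np.linspace(45, 55, 1001)
est = freqs[np.argmax(music_spectrum(x, fs, model_order=2, subspace_dim=20, freqs=freqs))]
print(est)                                       # a value close to 49.7 Hz
```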
IEEE Transactions on Information Theory (00189448) (3)
The diversity-multiplexing trade-off (DMT) of half-duplex single-relay networks is computed analytically, where all the channels in the networks are assumed to be quasi-static flat-fading channels with independent Rayleigh distributions. We analytically compute the DMT of the dynamic quantize-map-and-forward (DQMF) strategy, which achieves the optimal DMT for half-duplex single-relay networks with local channel state information. As a result, the optimal listening time of the relay in the DQMF strategy, as a function of the source-relay channel realization, can also be determined using our solution. © 1963-2012 IEEE.
Wireless Personal Communications (1572834X) 97(2)pp. 2781-2797
In general, signals transmitted by primary users (PUs) in a cognitive radio network have cyclostationary characteristics, whereas the noise signals do not have any cyclostationary characteristics. Thus, detecting the existence of the PUs can be performed by measuring the cyclostationarity of the signals which are present in the communication channels. In this paper, we propose a sensing algorithm for secondary users (SUs) that uses a set of normalized least mean square (NLMS) adaptive filters in order to estimate the signal in the communication channel from its frequency-shifted samples. When the received signal is cyclostationary, i.e., the PUs are transmitting, the norm of the NLMS filters' weights at the related SU is anticipated to be nonzero. On the other hand, when the signal is purely noise, the norm converges to zero. Therefore, in the proposed algorithm, sensing is performed by comparing the norm of the weights to a threshold. We derive the probability of detection and false alarm, and by simulations we compare the performance of our algorithm to other known sensing algorithms with respect to detection probability and complexity. © 2017, Springer Science+Business Media, LLC.
This paper studies the diversity-multiplexing trade-off (DMT) of half-duplex diamond relay channels. Diversity and multiplexing gains are commonly used to evaluate the performance of any relaying strategy in wireless relay networks with fading channels. DMT captures the fundamental trade-off between these two gains for communications at high signal-to-noise ratios (SNR). In this paper, we propose an efficient algorithm that computes the DMT of the static-QMF communication strategy in half-duplex diamond relay channels. To find the DMT of static QMF, one needs to solve a cumbersome optimization problem. The proposed algorithm breaks the given optimization into a finite number of sub-optimization problems, each of which is a quasi-concave optimization problem. Then, using a binary search method, each sub-optimization problem is solved efficiently, and the results of the sub-optimization problems are combined to find the DMT of static QMF communication in the network. © 2016 IEEE.
In this paper, we address the problem of direction-of-arrival (DOA) estimation using a novel spatial sampling scheme based on difference set (DS) codes, called DS-spatial sampling. It is shown that the proposed DS-spatial sampling scheme can be modeled by a deterministic dictionary with minimum coherence. We also develop a low complexity compressed sensing (CS) model for DOA estimation. The proposed methods can reduce the number of array elements as well as the number of receivers. Compared with the conventional DOA estimation algorithm, the proposed sampling and processing method can achieve significantly higher resolution. © 2015 IEEE.
IEEE Transactions on Information Theory (00189448) (11)
Computing optimal half-duplex schedules in Gaussian relay networks is a challenging problem due to the lack of an exact capacity characterization and the large number of transmit-receive configurations that must be considered. We approach the problem using a constant-gap capacity approximation based on the cut-set bound with independent encoding at the nodes. We formulate an optimization problem to obtain the cut-set optimal half-duplex schedule and find that it is hard to solve in general. This is because it involves an exponential number of variables, since the number of ways to assign each node to either transmitter or receiver mode is exponential in the number of nodes. We present a general technique that takes advantage of specific structures in the topology of a given network and allows us to reduce the complexity of this problem. In certain classes of network topologies, our approach yields polynomial time algorithms for finding half-duplex schedules that achieve capacity within a constant gap. We use simulations to show running time improvements over alternative methods and compare the performance of various half-duplex scheduling approaches in different SNR regimes. © 2014 IEEE.
IEEE Transactions on Information Theory (00189448) (1)
When data traffic in a wireless network is bursty, small amounts of data sporadically become available for transmission, at times that are unknown at the receivers, and an extra amount of energy must be spent at the transmitters to overcome this lack of synchronization between the network nodes. In practice, predefined header sequences are used with the purpose of synchronizing the different network nodes. However, in networks where relays must be used for communication, the overhead required for synchronizing the entire network may be very significant. In this paper, we study the fundamental limits of energy-efficient communication in an asynchronous diamond network with two relays. We formalize the notion of relay synchronization by saying that a relay is synchronized if the conditional entropy of the arrival time of the source message given the received signals at the relay is small. We show that the minimum energy-per-bit for bursty traffic in diamond networks is achieved with a coding scheme where each relay is either synchronized or not used at all. A consequence of this result is the derivation of a lower bound to the minimum energy-per-bit for bursty communication in diamond networks. This bound allows us to show that schemes that perform the tasks of synchronization and communication separately (i.e., with synchronization signals preceding the communication block) can achieve the minimum energy-per-bit to within a constant fraction that ranges from 2 in the synchronous case to 1 in the highly asynchronous regime. © 1963-2012 IEEE.
IEEE Transactions on Information Theory (00189448) (3)
The capacity or approximations to the capacity of various single-source single-destination relay network models have been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap, for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in a matter of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power optimization problems and show that they can be solved in polynomial time. In particular, we show that the minimization of the sum of powers assigned to the nodes subject to a minimum rate constraint (measured in terms of cut-set bounds) can be computed in polynomial time. We propose a heuristic algorithm to solve this problem and measure its performance through simulations on random Gaussian networks. We observe that in the optimal allocations, most of the power is assigned to a small subset of relays, which suggests that network simplification may be possible without excessive performance degradation. © 1963-2012 IEEE.
We consider computing the capacity of half-duplex relay networks with orthogonal channels. In such networks, if the network has a layered structure, we show that the capacity can be computed in polynomial time using the ellipsoid method. Moreover, for networks with local connectivity, such that the connectivity radius is a constant independent of the size of the network, a polynomial time algorithm is presented to compute the capacity. © 2013 IEEE.
IEEE International Symposium on Information Theory - Proceedings (21578095)
Computing the cut-set bound in half-duplex (HD) relay networks is a challenging optimization problem that involves an exponential number of variables and constraints (exponential in the number of nodes in the network). We present a general technique for efficiently computing the HD schedule that maximizes the cut-set bound (with i.i.d. input distribution) in layered Gaussian relay networks. We use simulations to show running time improvements over alternative methods and compare the performance of various HD scheduling approaches in different SNR regimes. © 2013 IEEE.
IEEE Transactions on Information Theory (00189448) (3)
A new communication scheme for Gaussian parallel relay networks based on superposition coding and partial decoding at the relays is presented. Some specific examples are proposed in which two codebook layers are superimposed. The first-level codebook is constructed with symbols from a binary or ternary alphabet, while the second-level codebook is composed of codewords chosen with Gaussian symbols. The new communication scheme is a generalization of decode-and-forward, amplify-and-forward, and bursty-amplify-and-forward. The asymptotic low-signal-to-noise-ratio regime is studied using achievable rates and minimum energy-per-bit as performance metrics. It is shown that the new scheme outperforms all previously known schemes for some channels and parameter ranges. © 1963-2012 IEEE.
Very often, constrained coding schemes impose nonlinear constraints on arrays and therefore it is usually nontrivial to determine how many arrays satisfy the given constraints. In this paper we show that there are non-trivial constrained coding scenarios where the number of constrained arrays can be estimated to a surprisingly high accuracy with the help of the sum-product algorithm, despite the fact that the underlying factor graphs have many short cycles, and we investigate the reasons why this is the case. These findings open up interesting possibilities for determining the number of constrained arrays also for scenarios where other counting and bounding techniques are not readily available. © 2012 IEEE.
SIAM Journal on Discrete Mathematics (08954801) (4)
Let A_n be the set of all n×n binary matrices in which the number of 1's in each row and column is at most n/2. We show that $|A_n| = 2^{\,n^2 - \rho n + \delta\sqrt{n}} \cdot n^{O(1)}$, for a constant ρ ≈ 1.42515, and δ = δ(n) ≈ 1.46016 for even n and 0 otherwise. Copyright © by SIAM.
When data traffic in a wireless network is bursty, small amounts of data sporadically become available for transmission, and the energy cost associated with synchronizing the network nodes prior to each communication block is not negligible. Therefore, designing energy-efficient communication schemes for such asynchronous scenarios is of particular importance. In this paper, we show that, for symmetric diamond networks, by performing the tasks of synchronization and communication separately, it is possible to achieve the minimum energy-per-bit to within a factor that ranges from 2 in the synchronous case to 1 in the highly asynchronous regime. © 2012 IEEE.
A new communication scheme for Gaussian diamond relay networks based on superposition coding and partial decoding at the relays is presented. A first level codebook is constructed with symbols chosen from a binary alphabet while a second level codebook is composed of codewords chosen with Gaussian symbols. The relays partially decode the message to identify when to amplify the incoming signals. The new communication scheme is a generalization of decode-and-forward, amplify-and-forward, and bursty-amplify-and-forward. The achievable rates are studied in the asymptotic low SNR regime. It is shown that the new scheme outperforms all previously known schemes for some channels and parameter ranges. © 2012 IEEE.
Consider the set A_n of all n × n binary matrices in which the number of 1's in each row and column is at most n/2. We show that the redundancy, n² − log₂|A_n|, of this set equals ρn − δ√n + O(log n), for a constant ρ ≈ 1.42515, and δ = δ(n) ≈ 1.46016 for even n and 0 otherwise. © Copyright 2011 Hewlett-Packard Development Company, L.P.
IEEE International Symposium on Information Theory - Proceedings (21578104)
The capacity or approximations to the capacity of various single-source single-destination relay network models have been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap, for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in a matter of minutes. © 2011 IEEE.
Alipour, Solmaz ,
Parvaresh, F. ,
Ghajari, Houman ,
Kimball, Donald F.
The 60 GHz band is an unlicensed band which features a wide communication bandwidth and a low probability of intercept due to high oxygen absorption. The wide bandwidth allows for a high volume of information to be transmitted wirelessly over a short interval of time. The signal confinement in both time and space makes the 60 GHz band an ideal candidate for a covert Body Area Network (BAN). Current and future soldiers can attain enhanced mobility and increased survivability by taking advantage of advancements in V-band WLAN technology. In this paper, we study the 60 GHz propagation conditions associated with a BAN, optimized for interconnecting various subsystems on a soldier. The 60 GHz wireless channel model study presented includes both human and pig measurement results and analytical channel simulations using ray tracing methods. Measurements were made using three different antennas - horn, slot and patch. The characterization of the channel is provided through path loss data and in some cases, S-parameter data from each node at different receiver points on a soldier's body and gear. ©2010 IEEE.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (15206149)
We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal that has dimension N = nd, which consists of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Ω that maps the (s, d)-block sparse signal to a measurement vector of dimension M, where $sd < N\bigl(1 - (1 - M/N)^{d/(d+1)}\bigr) - o(1)$. We show that if the (s, d)-block sparse signal is chosen uniformly at random, then the signal can almost surely be reconstructed from the measurement vector in O(N³) computations. ©2008 IEEE.
IEEE Journal on Selected Topics in Signal Processing (19324553) (3)
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translates to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover signals with less sparsity. © 2008 IEEE.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (15206149)
DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. Typically, each microarray spot contains a large number of copies of a single probe designed to capture a single target, and hence collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Since only a small fraction of the total number of genes represented by the two samples is differentially expressed, a vast number of probe spots will not provide any useful information. To this end, we consider an alternative design, the so-called compressed microarrays, wherein each spot is a composite of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translates to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. Moreover, we propose an algorithm which has far less computational complexity than the widely-used linear-programming-based methods, and can also recover signals with less sparsity. ©2008 IEEE.
Conference Record - Asilomar Conference on Signals, Systems and Computers (10586393)
Currently, DNA microarrays comprising tens of thousands of probe spots are employed to test entire genomes in a single experiment. Typically, each microarray spot contains a large number of copies of a single probe, and hence collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Since only a small fraction of the total number of genes represented by the two samples is differentially expressed, a large fraction of a microarray does not provide any useful information. To this end, in this paper we consider an alternative microarray design wherein each spot is a composite of several different probes, and the total number of spots is potentially much smaller than the number of genes being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. Experimental verification of the proposed methodology is presented. © 2007 IEEE.
IEEE International Symposium on Information Theory - Proceedings (21578101)
The multivariate interpolation decoding (MID) algorithm for certain Reed-Solomon codes was recently introduced by Parvaresh and Vardy. The MID algorithm attempts to list-decode up to nτ_MID = n(1 - R^(M/(M+1))) errors in a Reed-Solomon code of length n and rate R, using (M+1)-variate polynomial interpolation. This improves on the Guruswami-Sudan decoding radius of τ_GS = 1 - √R by a large margin, especially for high-rate codes. The problem is that successful decoding is not guaranteed: there are certain patterns of fewer than nτ_MID errors which the MID algorithm fails to decode. Nevertheless, simulations show that the actual performance of the MID decoder is very close to what one would expect if all patterns of up to nτ_MID errors were corrected. On the other hand, analysis of the failure probability of the MID algorithm is extremely difficult, and so far there have been no analytic results to confirm this empirically observed behavior. In this work, we provide such analytic results: we present a detailed analysis of the probability of failure of the MID algorithm for the special case where M = 2 and the interpolation multiplicity is m = 1. In this case, the MID algorithm attempts to correct up to nτ_{2,1} errors, where τ_{2,1} = 1 - ∛(6R²). We consider the situation where the symbol values received from the channel at the erroneous positions are distributed uniformly at random (a version of the q-ary symmetric channel). We show that, with high probability, the performance of the MID algorithm is very close to the optimum in this case. Specifically, we prove that if the fraction of positions in error is at most τ_{2,1} - O(R^(5/3)), then the probability of failure of the MID algorithm is at most n^(-Ω(n)). Thus the probability of failure is, indeed, negligible for large n in this case. © 2006 IEEE.
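For concreteness, the snippet below tabulates the Guruswami-Sudan fraction 1 - √R, the asymptotic trivariate MID fraction 1 - R^(2/3) (M = 2, large multiplicity), and the multiplicity-1 fraction τ_{2,1} = 1 - ∛(6R²) analyzed in this paper, at a few arbitrarily chosen rates.

```python
# Numeric comparison of the decoding radii discussed above; the sample rates are arbitrary.
for R in (0.01, 0.02, 0.05, 0.1, 0.2, 0.4):
    tau_gs = 1 - R ** 0.5                   # Guruswami-Sudan
    tau_mid = 1 - R ** (2 / 3)              # MID, M = 2, large interpolation multiplicity
    tau_21 = 1 - (6 * R ** 2) ** (1 / 3)    # MID, M = 2, multiplicity m = 1
    print(f"R = {R:.2f}   GS: {tau_gs:.3f}   MID: {tau_mid:.3f}   MID (m = 1): {tau_21:.3f}")
```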
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS (02725428)
We introduce a new family of error-correcting codes that have a polynomial-time encoder and a polynomial-time list-decoder, correcting a fraction of adversarial errors up to τ_M = 1 - (M^M R^M)^(1/(M+1)), where R is the rate of the code and M ≥ 1 is an arbitrary integer parameter. This makes it possible to decode beyond the Guruswami-Sudan radius of 1 - √R for all rates less than 1/16. Stated another way, for any ε > 0, we can list-decode in polynomial time a fraction of errors up to 1 - ε with a code of length n and rate Ω(ε/log(1/ε)), defined over an alphabet of size n^M = n^(O(log(1/ε))). Notably, this error correction is achieved in the worst case against adversarial errors: a probabilistic model for the error distribution is neither needed nor assumed. The best results so far for polynomial-time list-decoding of adversarial errors required a rate of O(ε²) to achieve the correction radius of 1 - ε. Our codes and list-decoders are based on two key ideas. The first is the transition from bivariate polynomial interpolation, pioneered by Sudan and Guruswami-Sudan [12, 22], to multivariate interpolation decoding. The second idea is to part ways with Reed-Solomon codes, for which numerous prior attempts [2, 3, 12, 18] at breaking the O(ε²) rate barrier in the worst case were unsuccessful. Rather than devising a better list-decoder for Reed-Solomon codes, we devise better codes. Standard Reed-Solomon encoders view a message as a polynomial f(X) over a field F_q, and produce the corresponding codeword by evaluating f(X) at n distinct elements of F_q. Herein, given f(X), we first compute one or more related polynomials g_1(X), g_2(X), ..., g_{M-1}(X) and produce the corresponding codeword by evaluating all these polynomials. Correlation between f(X) and the g_i(X), carefully designed into our encoder, then provides the additional information we need to recover the encoded message from the output of the multivariate interpolation process. © 2005 IEEE.
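The toy sketch below illustrates the encoding idea: alongside the message polynomial f(X), a correlated polynomial g(X) is evaluated as well. The correlation used here, g(X) = f(X)^h mod E(X) with E(X) irreducible, follows the well-known Parvaresh-Vardy instantiation; the field size p, the polynomial E(X), and the exponent h are illustrative choices only.

```python
# Toy Parvaresh-Vardy-style encoder over F_7: evaluate both f(X) and
# g(X) = f(X)^h mod E(X) at all field elements. Polynomials are coefficient
# lists, lowest degree first.

def poly_mul(a, b, p):
    # product of two polynomials over F_p
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def poly_mod(a, e, p):
    # remainder of a modulo the monic polynomial e over F_p
    a = a[:]
    while len(a) >= len(e):
        c, shift = a[-1], len(a) - len(e)
        for i, ei in enumerate(e):
            a[shift + i] = (a[shift + i] - c * ei) % p
        a.pop()
    return a

def poly_eval(a, x, p):
    # Horner evaluation over F_p
    v = 0
    for c in reversed(a):
        v = (v * x + c) % p
    return v

p = 7             # field size (illustrative)
E = [1, 0, 1]     # E(X) = X^2 + 1, irreducible over F_7 since -1 is a non-residue mod 7
h = 3             # correlation exponent (illustrative)
f = [2, 5]        # message polynomial f(X) = 2 + 5X, deg f < deg E

g = [1]
for _ in range(h):                    # g(X) = f(X)^h mod E(X)
    g = poly_mod(poly_mul(g, f, p), E, p)

codeword = [(poly_eval(f, x, p), poly_eval(g, x, p)) for x in range(p)]
print(codeword)                       # one (f, g) evaluation pair per code position
```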
IEEE International Symposium on Information Theory - Proceedings (21578097)
The main computational steps in algebraic soft-decoding and/or Sudan-type list-decoding of Reed-Solomon codes are interpolation and factorization. The interpolation consists of computing a bivariate polynomial Q(X, Y) that passes through a prescribed set of points with prescribed multiplicities. Using the iterative algorithm of Koetter, this computation can be accomplished in time O(N²), where N is the number of linear equations satisfied by the coefficients of Q(X, Y). Here, we recast the iterative interpolation procedure of [2] as a computation of the product of a certain chain of polynomial matrices. We then derive a dynamic-programming algorithm which optimizes the multiplication order in computing this matrix chain. The resulting optimization reduces the number of finite-field operations required to compute Q(X, Y) by a factor of about two.
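The dynamic program below is the textbook matrix-chain-order algorithm, included only to illustrate the kind of multiplication-order optimization described above; the cost model for chains of polynomial matrices in the interpolation setting is different, and the matrix dimensions used here are made up.

```python
# Classic dynamic program for choosing the cheapest order in which to multiply a
# chain of matrices; dims[i], dims[i+1] are the row/column counts of matrix i.
def matrix_chain_order(dims):
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):               # length of the sub-chain
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):                # try every split point
                c = cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k
    return cost[0][n - 1], split

best_cost, _ = matrix_chain_order([5, 10, 3, 12, 5])
print(best_cost)                                 # minimum number of scalar multiplications
```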
IEEE International Symposium on Information Theory - Proceedings (21578096)
A soft-decision decoding algorithm for Reed-Solomon codes was recently proposed in [2]. This algorithm converts probabilities observed at the channel output into algebraic interpolation conditions, specified in terms of a multiplicity matrix M. Koetter and Vardy [2] show that the probability of decoding failure is given by Pr{S_M ≤ Δ(M)}, where S_M is a random variable and Δ(M) is a known function of M. They then compute the multiplicity matrix M that maximizes the expected value of S_M. Here, we attempt to directly minimize the overall probability of decoding failure Pr{S_M ≤ Δ(M)}. First, we recast this optimization problem into a geometrical framework. Using this framework, we derive a simple modification to the KV algorithm which results in a provably better multiplicity assignment. Alternatively, we approximate the distribution of S_M by a Gaussian distribution, and develop an iterative algorithm to minimize Pr{S_M ≤ Δ(M)} under this approximation. This leads to coding gains of about 0.20 dB for RS codes of length 255 and up to 0.75 dB for RS codes of length 15, as compared to the Koetter-Vardy algorithm.
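A minimal sketch of the Gaussian-approximation step: if S_M is modeled as a normal random variable with mean mu and standard deviation sigma, then Pr{S_M ≤ Δ(M)} is simply the normal CDF evaluated at Δ. The numerical values below are placeholders, not quantities from the paper.

```python
# Failure probability under the Gaussian approximation of S_M; mu, sigma, and
# delta are placeholder values, not parameters taken from the decoder.
import math

def approx_failure_prob(mu, sigma, delta):
    z = (delta - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # normal CDF at delta

print(approx_failure_prob(mu=40.0, sigma=5.0, delta=32.0))
```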
We study the diversity-multiplexing trade-off (DMT) of diamond relay channels. A diamond network contains a transmitter, two relays, and a receiver. The DMT of a network captures a fundamental trade-off between the reliability and the rate of transmission, expressed through the diversity and multiplexing gains, for wireless networks with fading channels. In this paper, we compute the DMT of full-duplex diamond relay channels and find the DMT of the quantize-map-and-forward (QMF) strategy for half-duplex diamond relay channels. Also, a new genetic algorithm is introduced to compute the DMT of the static QMF (SQMF) strategy for half-duplex diamond relay channels. By simulation, it is shown that the running time of the genetic algorithm is much lower than that of grid-based optimization techniques for finding the DMT of SQMF. The novelty of the proposed genetic algorithm lies in its creative crossover and mutation operators, which improve the results. © 2016 IEEE.
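A generic genetic-algorithm skeleton of the kind described above is sketched next, maximizing a black-box objective over a fixed-length vector with entries in [0, 1]. The truncation selection, single-point crossover, and Gaussian mutation used here are standard placeholders, not the problem-specific operators proposed for the SQMF DMT optimization.

```python
# Generic genetic search: evolve a population of real vectors toward higher
# objective values; all operators and parameters below are generic placeholders.
import random

def genetic_search(objective, dim, pop_size=40, generations=200,
                   mutation_rate=0.1, mutation_scale=0.05):
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=objective, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)                  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, v + random.gauss(0, mutation_scale)))
                     if random.random() < mutation_rate else v
                     for v in child]                        # clipped Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=objective)

# toy usage: maximize a simple concave objective on [0, 1]^3
print(genetic_search(lambda v: -sum((x - 0.3) ** 2 for x in v), dim=3))
```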
The diversity-multiplexing trade-off (DMT) expresses the optimal trade-off between the transmission rate and the error probability for communication at high signal-to-noise ratio (SNR) in wireless networks with fading channels. For half-duplex single relay networks with quasi-static fading channels, if the average SNR of the source-relay (S-R) link is equal to the average SNR of the relay-destination (R-D) link, the quantize-map-and-forward (QMF) and dynamic decode-and-forward (DDF) strategies are DMT optimal at high and low multiplexing gains, respectively. In this paper, we show that the DDF and static QMF (SQMF) strategies are not generally DMT optimal in half-duplex single relay networks when the average SNRs of the S-R and R-D links are not equal. We show that the DMT of the dynamic QMF strategy is strictly greater than the DMT of the DDF and SQMF strategies for a specific half-duplex single relay network at certain multiplexing gains. © 2015 IEEE.
The problem of recovering a sparse L-ary signal from the magnitudes of its Fourier transform, or equivalently from its autocorrelation function, is considered. Although one can show that the phase retrieval problem for L-ary signals can be solved in time polynomial in L and the signal length, the current algorithms are still not practical. We introduce a backtracking algorithm on a tree which solves the phase retrieval problem. By simulations, we show that the average number of nodes visited on the tree by the backtracking algorithm grows polynomially in L and the signal length, although the search tree has exponential size. © 2014 IEEE.
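The sketch below is a toy backtracking reconstruction of a signal from its aperiodic autocorrelation, in the spirit of the tree search described above. The alphabet, the outside-in assignment order, and the pruning rule are simple illustrative choices, not the algorithm of the paper.

```python
# Toy tree search: assign the outermost signal positions first, and prune a branch
# as soon as a fully determined autocorrelation lag disagrees with the target r.

def autocorrelation(x):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

def reconstruct(r, alphabet):
    n = len(r)
    x = [None] * n

    def lag_ok(k):
        return sum(x[i] * x[i + k] for i in range(n - k)) == r[k]

    def assign(t):
        left, right = t, n - 1 - t
        if left > right:                               # leaf: verify all lags
            return all(lag_ok(k) for k in range(n))
        positions = [left] if left == right else [left, right]
        choices = [(a,) for a in alphabet] if left == right else \
                  [(a, b) for a in alphabet for b in alphabet]
        for vals in choices:
            for pos, v in zip(positions, vals):
                x[pos] = v
            # lags k >= n-1-t involve only already-assigned positions: prune on them
            if all(lag_ok(k) for k in range(n - 1 - t, n)) and assign(t + 1):
                return True
            for pos in positions:
                x[pos] = None
        return False

    return x[:] if assign(0) else None

signal = [2, 0, 1, 0, 3, 1]                            # an example 4-ary signal
r = autocorrelation(signal)
print(reconstruct(r, alphabet=range(4)))               # a signal whose autocorrelation is r
```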