Prof. Janusz Szczepański, Ph.D., Dr. Habil. 

Doctoral thesis
1985  Równanie Liouville'a w nieskończenie wymiarowej ośrodkowej przestrzeni Hilberta (The Liouville equation in an infinite-dimensional separable Hilbert space)
Habilitation thesis
2007-06-14  Zastosowanie układów dynamicznych w kryptografii (Applications of dynamical systems in cryptography)
Professor
2014-07-28  Title of professor
Supervision of doctoral theses
1.  2015-12-04  Paprocki Bartosz (UKW)  Analiza wydajności transmisji danych w komórkach i sieciach neuronowych metodami Teorii Informacji (Analysis of data transmission efficiency in neuronal cells and networks using Information Theory methods)
2.  2012-11-29  Chmielowiec Andrzej  Wydajne metody generowania bezpiecznych parametrów algorytmów klucza publicznego (Efficient methods of generating secure parameters for public-key algorithms)
Recent publications
1.  Paprocki B.^{♦}, Pręgowska A., Szczepański J., Optimizing information processing in brain-inspired neural networks, BULLETIN OF THE POLISH ACADEMY OF SCIENCES: TECHNICAL SCIENCES, ISSN: 0239-7528, DOI: 10.24425/bpasts.2020.131844, Vol.68, No.2, pp.225-233, 2020
Abstract: The way brain networks maintain high transmission efficiency is believed to be fundamental in understanding brain activity. Brains consisting of more cells render information transmission more reliable and robust to noise. On the other hand, processing information in larger networks requires additional energy. Recent studies suggest that it is complexity, connectivity, and function diversity, rather than just size and the number of neurons, that could favour the evolution of memory, learning, and higher cognition. In this paper, we use Shannon information theory to address transmission efficiency quantitatively. We describe neural networks as communication channels, and then we measure information as mutual information between stimuli and network responses. We employ a probabilistic neuron model based on the approach proposed by Levy and Baxter, which comprises essential qualitative information transfer mechanisms. In this paper, we overview and discuss our previous quantitative results regarding brain-inspired networks, addressing their qualitative consequences in the context of broader literature. It is shown that mutual information is often maximized in a very noisy environment, e.g., where only one-third of all input spikes are allowed to pass through noisy synapses and farther into the network. Moreover, we show that inhibitory connections as well as properly displaced long-range connections often significantly improve transmission efficiency. A deep understanding of brain processes in terms of advanced mathematical science plays an important role in the explanation of the nature of brain efficiency. Our results confirm that basic brain components that appear during the evolution process arise to optimise transmission performance.
Keywords: neural network, entropy, mutual information, noise, inhibitory neuron
 
2.  Pręgowska A., Kaplan E.^{♦}, Szczepański J., How far can neural correlations reduce uncertainty? Comparison of information transmission rates for Markov and Bernoulli processes, International Journal of Neural Systems, ISSN: 0129-0657, DOI: 10.1142/S0129065719500035, Vol.29, No.8, pp.1950003-1-13, 2019
Abstract: The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in the firing rates of individual spikes (rate code) or by the precise timing of every spike (temporal code)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to combine information with uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty or the amount of information, but by how much? In this paper we address this question by a direct comparison of the information per symbol conveyed by the words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter s, which is the sum of transition probabilities from the no-spike state to the spike state and vice versa. We found that in this case too a crucial role is played by the same parameter s. We calculated the maximal and minimal bounds of the quotient of ITRs for these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source compared with the information in the corresponding Bernoulli source for a given word length. Our results show that in the case of correlated signals the loss of information is relatively small, and thus temporal codes, which are more energetically efficient, can replace rate codes effectively. These results were confirmed by experiments.
Keywords: Shannon information theory, information source, information transmission rate, firing rate, neural coding
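The comparison described in this abstract can be sketched numerically. The following is a minimal illustration (helper names are ours, not the paper's): the entropy rate of a binary Markov source with transition probabilities p01 (no-spike to spike) and p10 (spike to no-spike), versus the Bernoulli source matched to the same stationary firing rate. When s = p01 + p10 = 1 the Markov source is uncorrelated and the two rates coincide; otherwise correlations reduce the information per symbol.

```python
import math

def binary_entropy(p):
    # H(p) in bits; degenerate sources carry no uncertainty
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def markov_entropy_rate(p01, p10):
    # entropy rate of a two-state Markov chain: the stationary mix
    # of the conditional entropies of the two states
    pi1 = p01 / (p01 + p10)  # stationary probability of the spike state
    return (1 - pi1) * binary_entropy(p01) + pi1 * binary_entropy(p10)

def bernoulli_entropy_rate(p01, p10):
    # uncorrelated source with the same stationary firing rate
    return binary_entropy(p01 / (p01 + p10))
```

For example, p01 = 0.1, p10 = 0.3 (so s = 0.4) gives a Markov rate of about 0.57 bits/symbol against about 0.81 bits/symbol for the matched Bernoulli source, a relatively modest loss, in line with the conclusion above.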
 
3.  Pręgowska A., Proniewska K.^{♦}, van Dam P.^{♦}, Szczepański J., Using Lempel-Ziv complexity as effective classification tool of the sleep-related breathing disorders, Computer Methods and Programs in Biomedicine, ISSN: 0169-2607, DOI: 10.1016/j.cmpb.2019.105052, Vol.182, pp.105052-1-7, 2019
Abstract: Background and objective: People suffer from sleep disorders caused by work-related stress, irregular lifestyle or mental health problems. Therefore, development of effective tools to diagnose sleep disorders is important. Recently, Information Theory has been exploited to analyze biomedical signals. We propose an efficient classification method for sleep anomalies, applying entropy-estimating algorithms to encoded ECG signals from patients suffering from Sleep-Related Breathing Disorders (SRBD). Methods: First, ECGs were discretized using an encoding method which captures the biosignals' variability. It takes into account oscillations of ECG measurements around signal averages. Next, the Lempel-Ziv complexity algorithm (LZ), which measures the pattern generation rate, was applied to estimate the entropy of the encoded signals. Then, optimal encoding parameters, which allow distinguishing normal versus abnormal events during sleep with high sensitivity and specificity, were determined numerically. Simultaneously, subjects' states were identified using the acoustic signal of breathing recorded during sleep in the same period. Results: Random sequences show normalized LZ close to 1, while for more regular sequences it is closer to 0. Our calculations show that SRBD patients have normalized LZ around 0.32 (on average), while the control group has complexity around 0.85. The results obtained for a public database are similar, i.e. LZ for SRBDs around 0.48 and for the control group 0.7. These show that signals within the control group are more random, whereas for the SRBD group ECGs are more deterministic. This finding remained valid both for signals acquired during the whole duration of the experiment and when shorter time intervals were considered. The proposed classifier provided sleep disorder diagnostics with a sensitivity of 93.75% and a specificity of 73.00%. To validate our method we also considered different variants of the training and testing sets. In all cases, the optimal encoding parameter, sensitivity and specificity values were similar to our results above. Conclusions: Our pilot study suggests that an LZ-based algorithm could be used as a clinical tool to classify sleep disorders, since the LZ complexities for SRBD positives versus healthy individuals show a significant difference. Moreover, normalized LZ complexity changes are related to the snoring level. This study also indicates that the LZ technique is able to detect sleep abnormalities at an early disorder stage.
Keywords: information theory, Lempel-Ziv complexity, entropy, ECG, sleep-related breathing disorders, randomness
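Several entries in this list use the Lempel-Ziv (1976) parsing as an entropy estimator. A minimal sketch of the normalized complexity referred to above (the ECG encoding step from the paper is not reproduced here, and the n/log2(n) normalization is one common convention, assumed rather than taken from the paper):

```python
import math

def lz76_complexity(s):
    # count the phrases in the Lempel-Ziv (1976) parsing of s:
    # a phrase is extended while it is still a substring of the
    # previously seen text, and closed as soon as it is new
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def normalized_lz(s):
    # normalized so that typical random binary sequences tend to ~1
    # while constant or periodic sequences stay close to 0
    n = len(s)
    return lz76_complexity(s) * math.log2(n) / n
```

A constant sequence such as "000...0" parses into only two phrases and scores near 0, whereas an irregular sequence accumulates many short phrases and scores near 1, which is the contrast the classifier above exploits.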
 
4.  Błoński S., Pręgowska A., Michalek T., Szczepański J., The use of Lempel-Ziv complexity to analyze turbulence and flow randomness based on velocity fluctuations, BULLETIN OF THE POLISH ACADEMY OF SCIENCES: TECHNICAL SCIENCES, ISSN: 0239-7528, DOI: 10.24425/bpasts.2019.130876, Vol.67, No.5, pp.957-962, 2019
Abstract: One of the mathematical tools to measure the generation rate of new patterns along a sequence of symbols is the Lempel-Ziv complexity (LZ). Under additional assumptions, LZ is an estimator of entropy in the Shannon sense. Since entropy is considered a measure of randomness, this means that LZ can also be treated as a randomness indicator. In this paper, we applied the LZ concept to the analysis of different flow regimes in cold flow combustor models. Experimental data for two combustor configurations, motivated by the need for efficient mixing, were considered. Extensive computer analysis was applied to develop a complexity approach to the analysis of velocity fluctuations recorded with hot-wire anemometry and the PIV technique. A natural encoding method for these velocity fluctuations was proposed. It turned out that with this encoding the complexity values of the sequences are well correlated with the values obtained by means of the RMS method (larger/smaller complexity corresponds to larger/smaller RMS). However, our calculations pointed out the interesting result that the most complex, i.e. most random, behavior does not overlap with the "most turbulent" point determined by the RMS method, but is located at the point with maximal average velocity. It seems that the complexity method can be particularly useful for analyzing turbulent and unsteady flow regimes. Moreover, complexity can also be used to establish other flow characteristics such as ergodicity or mixing.
Keywords: turbulence, complexity, entropy, randomness
 
5.  Pręgowska A., Casti A.^{♦}, Kaplan E.^{♦}, Wajnryb E., Szczepański J., Information processing in the LGN: a comparison of neural codes and cell types, BIOLOGICAL CYBERNETICS, ISSN: 0340-1200, DOI: 10.1007/s00422-019-00801-0, Vol.113, No.4, pp.453-464, 2019
Abstract: To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information that is transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat Lateral Geniculate Nucleus (LGN), which is the visual portion of the thalamus, through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the Firing Rate with the Shannon Information Transmission Rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively. This suggests that the energy used for spiking does not translate directly into the information to be transmitted. We also compared Firing Rates with Information Rates for X-ON and X-OFF cells. We found that, for X-ON cells, the Firing Rate and Information Rate often behave in a completely different way, while for X-OFF cells these rates are much more highly correlated. Our results suggest that for X-ON cells a more efficient "temporal code" is employed, while for X-OFF cells a straightforward "rate code" is used, which is more reliable and is correlated with energy consumption.
Keywords: Shannon information theory, cat LGN, ON-OFF cells, neural coding, entropy, firing rate
 
6.  Pręgowska A., Szczepański J., Wajnryb E., Temporal code versus rate code for binary Information Sources, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2016.08.034, Vol.216, pp.756-762, 2016
Abstract: Neuroscientists formulate very different hypotheses about the nature of neural coding. At one extreme, it has been argued that neurons encode information through relatively slow changes in the arrival rates of individual spikes (rate codes) and that the irregularity in the spike trains reflects the noise in the system. At the other extreme, this irregularity is the code itself (temporal codes), so that the precise timing of every spike carries additional information about the input. It is well known that in the estimation of the Shannon Information Transmission Rate, the patterns and temporal structures are taken into account, while the "rate code" is already determined by the firing rate, i.e. by the spike frequency. In this paper we compare these two types of codes for binary Information Sources, which model encoded spike trains. Assuming that the information transmitted by a neuron is governed by an uncorrelated stochastic process or by a process with a memory, we compare the Information Transmission Rates carried by such spike trains with their firing rates. Here we show that a crucial role in the relation between information transmission and firing rates is played by a factor that we call the "jumping" parameter. This parameter corresponds to the probability of transitions from the no-spike state to the spike state and vice versa. For low jumping parameter values, the quotient of information and firing rates is a monotonically decreasing function of the firing rate, and there is therefore a straightforward, one-to-one relation between temporal and rate codes. However, it turns out that for large enough values of the jumping parameter this quotient is a non-monotonic function of the firing rate and it exhibits a global maximum, so that in this case there is an optimal firing rate. Moreover, there is no one-to-one relation between information and firing rates, so the temporal and rate codes differ qualitatively. This leads to the observation that the behavior of the quotient of information and firing rates for a large jumping parameter value is especially important in the context of bursting phenomena.
Keywords: Information Theory, Information Source, Stochastic process, Information transmission rate, Firing rate
 
7.  Pręgowska A., Szczepański J., Wajnryb E., Mutual information against correlations in binary communication channels, BMC NEUROSCIENCE, ISSN: 1471-2202, DOI: 10.1186/s12868-015-0168-0, Vol.16, No.32, pp.1-7, 2015
Keywords: Shannon information, Communication channel, Entropy, Mutual information, Correlation, Neuronal encoding
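Mutual information between the input and output of a binary channel is the efficiency measure used throughout these papers. A minimal sketch from a known joint distribution (not the entropy estimators the papers apply to empirical spike-train data):

```python
import math

def mutual_information(joint):
    # joint[x][y] = P(X = x, Y = y) for a discrete memoryless channel
    px = [sum(row) for row in joint]        # input marginal
    py = [sum(col) for col in zip(*joint)]  # output marginal
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0.0:
                # I(X;Y) = sum p(x,y) log2 [ p(x,y) / (p(x) p(y)) ]
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi
```

An independent joint distribution yields 0 bits, while a noiseless binary channel with equiprobable inputs yields the full 1 bit per symbol.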
 
8.  Arnold M.M.^{♦}, Szczepański J., Montejo N.^{♦}, Amigó J.M.^{♦}, Wajnryb E., Sanchez-Vives M.V.^{♦}, Information content in cortical spike trains during brain state transitions, JOURNAL OF SLEEP RESEARCH, ISSN: 0962-1105, DOI: 10.1111/j.1365-2869.2012.01031.x, Vol.22, pp.13-21, 2013
Abstract: Even in the absence of external stimuli there is ongoing activity in the cerebral cortex as a result of recurrent connectivity. This paper attempts to characterize one aspect of this ongoing activity by examining how the information content carried by specific neurons varies as a function of brain state. We recorded from rats chronically implanted with tetrodes in the primary visual cortex during awake and sleep periods. Electroencephalogram and spike trains were recorded during 30-min periods, and 2-4 neuronal spike trains were isolated per tetrode offline. All the activity included in the analysis was spontaneous, being recorded from the visual cortex in the absence of visual stimuli. The brain state was determined through a combination of behavior evaluation, electroencephalogram and electromyogram analysis. Information in the spike trains was determined by using Lempel-Ziv Complexity. Complexity was used to estimate the entropy of neural discharges and thus the information content (Amigó et al., Neural Comput., 2004, 16: 717-736). The information content in spike trains (range 4-70 bits s−1) was evaluated during different brain states and particularly during the transition periods. Transitions toward states of deeper sleep coincided with a decrease of information, while transitions to the awake state resulted in an increase in information. Changes in both directions were of the same magnitude, about 30%. Information in spike trains showed a high temporal correlation between neurons, reinforcing the idea of the impact of the brain state on the information content of spike trains.
Keywords: awake, brain states, entropy, firing rate, information, sleep, spike train
 
9.  Paprocki B.^{♦}, Szczepański J., How do the amplitude fluctuations affect the neuronal transmission efficiency, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2012.11.001, Vol.104, pp.50-56, 2013
Abstract: Tremendous effort has been devoted to understanding the nature of neuronal coding, its high efficiency and the mechanisms governing it. Paprocki and Szczepanski [13] explored the model of neuron proposed by Levy and Baxter [12] and analyzed the efficiency with respect to the synaptic failure, activation threshold, firing rate and type of the input source. In this paper we study the influence of amplitude fluctuations (damping, uniform and amplifying), another important component in neuronal computations. The efficiency is understood in the sense of mutual information (between input and output signals), which constitutes the fundamental concept of the Shannon communication theory. Using high quality entropy estimators we determined maximal values of the mutual information between input and output neuronal signals for a simple neuronal architecture. We observed that this maximal efficiency remains nearly constant, almost regardless of the fluctuation type. We have also found that for a wide range of thresholds, both for damping and amplifying fluctuations, the mutual information behaves in an opposite way to the corresponding correlations between input and output signals. These calculations confirm that the neuronal coding is much more subtle than the straightforward intuitive optimization of input-output correlations.
Keywords: Neural computation, Mutual information, Amplitude fluctuation, Activation threshold, Synaptic failure, Entropy estimation
 
10.  Paprocki B.^{♦}, Szczepański J., Transmission efficiency in ring, brain inspired neuronal networks. Information and energetic aspects, Brain Research, ISSN: 0006-8993, DOI: 10.1016/j.brainres.2013.07.024, Vol.1536, pp.135-143, 2013
Abstract: Organisms often evolve as compromises, and many of these compromises can be expressed in terms of energy efficiency. Thus, many authors analyze the energetic costs of processes during information transmission in the brain. In this paper we study the information transmission rate per energy used in a class of ring, brain-inspired neural networks, which we assume to involve components like excitatory and inhibitory neurons or long-range connections. In choosing the neuron model we followed a probabilistic approach proposed by Levy and Baxter (2002), which contains all essential qualitative mechanisms participating in the transmission process and provides results consistent with physiologically observed values.
Keywords: Information transmission efficiency, Mutual information, Brain inspired network, Inhibitory neuron, Long-range connection, Neuronal computation
 
11.  Paprocki B.^{♦}, Szczepański J., Kołbuk D., Information transmission efficiency in neuronal communication systems, BMC NEUROSCIENCE, ISSN: 1471-2202, DOI: 10.1186/1471-2202-14-S1-P217, Vol.14(Suppl 1), No.P217, pp.1-2, 2013
Abstract: The nature and efficiency of brain transmission processes, their high reliability and efficiency, is one of the most elusive areas of contemporary science [1]. We study information transmission efficiency by considering neuronal communication as a Shannon-type channel. Thus, using high quality entropy estimators, we evaluate the mutual information between input and output signals. We assume the neuron model proposed by Levy and Baxter [2], which incorporates all essential qualitative mechanisms participating in the neural transmission process.
Keywords: transmission efficiency, neuronal communication, Shannon-type channel
 
12.  Szczepański J., Arnold M.^{♦}, Wajnryb E., Amigó J.M.^{♦}, Sanchez-Vives M.V.^{♦}, Mutual information and redundancy in spontaneous communication between cortical neurons, BIOLOGICAL CYBERNETICS, ISSN: 0340-1200, DOI: 10.1007/s00422-011-0425-y, Vol.104, pp.161-174, 2011
Abstract: An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited a behavior characterized by a higher variability. We conjecture that this combination of approximately constant RMI and more variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates — in other words, in a synergetic manner. The proposed method applies not only to stationary, but also to locally stationary neural signals.
Keywords: Neurons, Shannon information, Entropy, Mutual information, Redundancy, Visual cortex, Spike trains, Spontaneous activity
 
13.  Paprocki B.^{♦}, Szczepański J., Efficiency of neural transmission as a function of synaptic noise, threshold, and source characteristics, BIOSYSTEMS, ISSN: 0303-2647, DOI: 10.1016/j.biosystems.2011.03.005, Vol.105, pp.62-72, 2011
Abstract: There has been a growing interest in the estimation of information carried by a single neuron and multiple single units or populations of neurons in response to specific stimuli. In this paper we analyze, inspired by the article of Levy and Baxter (2002), the efficiency of neuronal communication by considering dendrosomatic summation as a Shannon-type channel (1948) and by considering such uncertain synaptic transmission as part of the dendrosomatic computation. Specifically, we study Mutual Information between input and output signals for different types of neuronal network architectures by applying efficient entropy estimators. We analyze the influence of the following quantities affecting transmission abilities of neurons: synaptic failure, activation threshold, firing rate and type of the input source. We observed a number of surprising non-intuitive effects. It turns out that, especially for lower activation thresholds, significant synaptic noise can lead even to a twofold increase of the transmission efficiency. Moreover, the efficiency turns out to be a non-monotonic function of the activation threshold. We find a universal value of threshold for which a local maximum of Mutual Information is achieved for most of the neuronal architectures, regardless of the type of the source (correlated and non-correlated). Additionally, to reach the global maximum the optimal firing rates must increase with the threshold. This effect is particularly visible for lower firing rates. For higher firing rates the influence of synaptic noise on the transmission efficiency is more advantageous. Noise is an inherent component of communication in biological systems, hence, based on our analysis, we conjecture that the neuronal architecture was adjusted to make more effective use of this attribute.
Keywords: Neuronal computation, Entropy, Mutual Information, Estimators, Neuron, Quantal failure, Activation threshold, Neural network
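Several of the papers above build on the Levy and Baxter (2002) probabilistic neuron. The following is a hypothetical minimal sketch of the qualitative mechanisms named in these abstracts (synaptic failure, amplitude fluctuation, dendrosomatic summation, activation threshold); the function and parameter names are illustrative, not the papers':

```python
import random

def levy_baxter_neuron(input_spikes, p_success, threshold, amplitude=lambda: 1.0):
    # Each input spike survives its synapse with probability p_success
    # (modeling synaptic/quantal failure), is scaled by a random
    # amplitude fluctuation, the contributions are summed
    # dendrosomatically, and the neuron fires iff the sum reaches
    # the activation threshold.
    total = sum(amplitude()
                for spike in input_spikes
                if spike == 1 and random.random() < p_success)
    return 1 if total >= threshold else 0
```

With p_success = 1.0 and unit amplitudes the sketch reduces to a deterministic threshold unit; lowering p_success injects the synaptic noise whose surprising benefit for mutual information is the subject of entry 13.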
 
14.  Szczepański J., On the distribution function of the complexity of finite sequences, INFORMATION SCIENCES, ISSN: 0020-0255, DOI: 10.1016/j.ins.2008.12.019, Vol.179, pp.1217-1220, 2009
Abstract: Investigations of complexity of sequences lead to important applications such as effective data compression, testing of randomness, discriminating between information sources and many others. In this paper we establish formulae describing the distribution functions of random variables representing the complexity of finite sequences introduced by Lempel and Ziv in 1976. It is known that this quantity can be used as an estimator of entropy. We show that the distribution functions depend affinely on the probabilities of the so-called "exact" sequences.
Keywords: Lempel-Ziv complexity, Distribution function, Randomness
 
15.  Nagarajan R.^{♦}, Szczepański J., Wajnryb E., Interpreting non-random signatures in biomedical signals with Lempel-Ziv complexity, PHYSICA D: NONLINEAR PHENOMENA, ISSN: 0167-2789, DOI: 10.1016/j.physd.2007.09.007, Vol.237, pp.359-364, 2008
Abstract: Lempel-Ziv complexity (LZ) [J. Ziv, A. Lempel, On the complexity of finite sequences, IEEE Trans. Inform. Theory 22 (1976) 75-81] and its variants have been used widely to identify non-random patterns in biomedical signals obtained across distinct physiological states. Non-random signatures of the complexity measure can occur under nonlinear deterministic as well as non-deterministic settings. Surrogate data testing has also been encouraged in the past in conjunction with complexity estimates to make a finer distinction between various classes of processes. In this brief letter, we make two important observations: (1) Non-Gaussian noise at the dynamical level can elude existing surrogate algorithms, namely phase-randomized surrogates (FT), amplitude-adjusted Fourier transform (AAFT) and iterated amplitude-adjusted Fourier transform (IAAFT). Thus any inference of nonlinear determinism as an explanation for the non-randomness is incomplete. (2) A decrease in complexity can be observed even across two linear processes with identical autocorrelation functions. The results are illustrated with a second-order autoregressive process with Gaussian and non-Gaussian innovations. AR(2) processes have been used widely to model several physiological phenomena, hence their choice. The results presented encourage cautious interpretation of non-random signatures in experimental signals using complexity measures.
Keywords: Lempel-Ziv complexity, Surrogate testing, Autoregressive process
 
16.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., On some properties of the discrete Lyapunov exponent, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2008.07.076, Vol.372, pp.6265-6268, 2008
Abstract: One of the possible by-products of discrete chaos is the application of its tools, in particular of the discrete Lyapunov exponent, to cryptography. In this Letter we explore this question in a very general setting.
 
17.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Theory and practice of chaotic cryptography, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2007.02.021, Vol.366, pp.211-216, 2007
Abstract: In this Letter we address some basic questions about chaotic cryptography, not least the very definition of chaos in discrete systems. We propose a conceptual framework and illustrate it with different examples from private and public key cryptography. We elaborate also on possible limits of chaotic cryptography.
 
18.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Discrete Lyapunov exponent and resistance to differential cryptanalysis, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, ISSN: 1549-7747, DOI: 10.1109/TCSII.2007.901576, Vol.54, No.10, pp.882-886, 2007
Abstract: In a recent paper, Jakimoski and Subbalakshmi provided a nice connection between the so-called discrete Lyapunov exponent of a permutation F defined on a finite lattice and its maximal differential probability, a parameter that measures the complexity of a differential cryptanalysis attack on the substitution defined by F. In this brief, we take a second look at their result to find some practical shortcomings. We also discuss more general aspects.
Keywords: Differential cryptanalysis, discrete Lyapunov exponent (DLE), maximum differential probability (DP)
 
19.  Kocarev L.^{♦}, Szczepański J., Amigó J.M.^{♦}, Tomovski I.^{♦}, Discrete chaos - I: theory, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, ISSN: 1549-8328, DOI: 10.1109/TCSI.2006.874181, Vol.53, No.6, pp.1300-1309, 2006
Abstract: We propose a definition of the discrete Lyapunov exponent for an arbitrary permutation of a finite lattice. For discrete-time dynamical systems, it measures the local (between neighboring points) average spreading of the system. We justify our definition by proving that, for large classes of chaotic maps, the corresponding discrete Lyapunov exponent approaches the largest Lyapunov exponent of a chaotic map when M → ∞, where M is the cardinality of the discrete phase space. In analogy with continuous systems, we say the system has discrete chaos if its discrete Lyapunov exponent tends to a positive number when M → ∞. We present several examples to illustrate the concepts being introduced.
Keywords: Chaos, discrete chaos, Lyapunov exponents
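One plausible reading of the definition in this abstract can be sketched as the average log-stretching of nearest-neighbor distances under a permutation of the lattice Z_M. This sketch assumes the circular lattice distance and unit perturbations (the paper's formal definition is more general):

```python
import math

def discrete_lyapunov(perm):
    # perm is a permutation of {0, ..., M-1}; average, over all lattice
    # points, the log of how far the images of neighboring points spread
    M = len(perm)

    def d(a, b):
        # circular distance on the lattice Z_M (an assumption here)
        return min(abs(a - b), M - abs(a - b))

    total = 0.0
    for i in range(M):
        j = (i + 1) % M  # nearest neighbor of i, at distance 1
        total += math.log(d(perm[j], perm[i]))
    return total / M
```

The identity permutation does not spread neighbors, giving exponent 0, while F(i) = 2i mod M (M odd) stretches every neighbor pair by a factor of 2, giving ln 2 — the Lyapunov exponent of the doubling map, consistent with the convergence result stated in the abstract.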
 
20.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Order patterns and chaos, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2006.01.093, Vol.355, pp.27-31, 2006
Abstract: Chaotic maps can mimic random behavior in a quite impressive way. In particular, those possessing a generating partition can produce any symbolic sequence by properly choosing the initial state. We study in this Letter the ability of chaotic maps to generate order patterns and come to the conclusion that their performance in this respect falls short of expectations. This result reveals some basic limitation of deterministic dynamics as compared to random ones. This being the case, we propose a non-statistical test based on 'forbidden' order patterns to discriminate chaotic from truly random time series with, in principle, arbitrarily high probability. Some relations with discrete chaos and chaotic cryptography are also discussed.
Keywords: Chaotic maps, Order patterns, Permutation entropy, Discrete Lyapunov exponent, Chaotic cryptography
 
21.  Szczepański J., Amigó J.M.^{♦}, Michałek T., Kocarev L.^{♦}, Cryptographically secure substitutions based on the approximation of mixing maps, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, ISSN: 1549-8328, DOI: 10.1109/TCSI.2004.841602, Vol.52, No.2, pp.443-453, 2005
Abstract: In this paper, we explore, following Shannon's suggestion that diffusion should be one of the ingredients of resistant block ciphers, the feasibility of designing cryptographically secure substitutions (think of S-boxes, say) via approximation of mixing maps by periodic transformations. The expectation behind this approach is, of course, that the nice diffusion properties of such maps will be inherited by their approximations, at least if the convergence rate is appropriate and the associated partitions are sufficiently fine. Our results show that this is indeed the case and that, in principle, block ciphers with close-to-optimal immunity to linear and differential cryptanalysis (as measured by the linear and differential approximation probabilities) can be designed along these guidelines. We provide also practical examples and numerical evidence for this approximation philosophy.
Keywords: Block cipher, differential cryptanalysis, linear cryptanalysis, mixing dynamical system, periodic approximation, S-box
 
22.  Amigó J.M.^{♦}, Szczepański J., Kocarev L.^{♦}, A chaos-based approach to the design of cryptographically secure substitutions, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2005.05.057, Vol.343, pp.55-60, 2005
Abstract: We show that chaotic maps may be used for designing so-called substitution boxes for ciphers resistant to linear and differential cryptanalysis, providing an alternative to the algebraic methods. Our approach is based on the approximation of mixing maps by periodic transformations. The expectation behind this is, of course, that the nice chaotic properties of such maps will be inherited by their approximations, at least if the convergence rate is appropriate and the associated partitions are sufficiently fine. We show that this is indeed the case and that, in principle, substitutions with close-to-optimal immunity to linear and differential cryptanalysis can be designed along these guidelines. We provide also practical examples and numerical evidence for this approximation philosophy.
Keywords: Chaotic maps, Periodic approximations, Bit permutations, Cryptanalysis
 
23.  Kocarev L.^{♦}, Szczepański J., Finite-space Lyapunov exponents and pseudochaos, PHYSICAL REVIEW LETTERS, ISSN: 0031-9007, DOI: 10.1103/PhysRevLett.93.234101, Vol.93, pp.234101-1-4, 2004 Abstract: We propose a definition of the finite-space Lyapunov exponent. For discrete-time dynamical systems, it measures the local (between neighboring points) average spreading of the system. We justify our definition by showing that, for large classes of chaotic maps, the corresponding finite-space Lyapunov exponent approaches the Lyapunov exponent of a chaotic map when M→∞, where M is the cardinality of the discrete phase space. In analogy with continuous systems, we say the system has pseudochaos if its finite-space Lyapunov exponent tends to a positive number (or to +∞) when M→∞. Affiliations:
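The exact definition is given in the paper; as a rough illustration, the sketch below uses one natural reading of it (the discretization of the logistic map and the uniform averaging rule are assumptions of this sketch, not the authors' formula): average the log of the spreading between the images of neighbouring grid points. A value that stays positive as M grows is the pseudochaos signature described above.

```python
import math

def discretized_logistic(M):
    """Discretization of f(x) = 4x(1-x) onto the grid {0, ..., M-1}
    (an illustrative choice; the image is clamped to the grid)."""
    def F(i):
        x = i / M
        return min(round(4.0 * x * (1.0 - x) * M), M - 1)
    return F

def finite_space_lyapunov(M):
    """One plausible reading (an assumption of this sketch) of a
    finite-space Lyapunov exponent: the log of the distance between
    images of neighbouring grid points, averaged over the phase space.
    Neighbours start at distance 1, so log(d/1) = log(d)."""
    F = discretized_logistic(M)
    total, count = 0.0, 0
    for i in range(M - 1):
        d = abs(F(i + 1) - F(i))
        if d > 0:
            total += math.log(d)
            count += 1
    return total / count if count else 0.0
```

The estimate stays positive as M grows, which is the qualitative behaviour the abstract associates with pseudochaos; no claim is made here that this particular average converges to the continuous Lyapunov exponent.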
 
24.  Szczepański J., Amigó J.M.^{♦}, Wajnryb E., Sanchez-Vives M.V.^{♦}, Characterizing spike trains with Lempel-Ziv complexity, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2004.01.026, Vol.58-60, pp.79-84, 2004 Abstract: We review several applications of Lempel-Ziv complexity to the characterization of neural responses. In particular, Lempel-Ziv complexity allows one to estimate the entropy of binned spike trains in an alternative way to the usual method based on the relative frequencies of words, with the definitive advantage of not requiring very long registers. We also use complexity to discriminate neural responses to different kinds of stimuli and to evaluate the number of states of neuronal sources. Keywords: Lempel-Ziv complexity, Entropy, Spike trains, Neuronal sources Affiliations:
 
25.  Amigó J.M.^{♦}, Szczepański J., Wajnryb E., Sanchez-Vives M.V.^{♦}, Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity, Neural Computation, ISSN: 0899-7667, DOI: 10.1162/089976604322860677, Vol.16, No.4, pp.717-736, 2004 Abstract: Normalized Lempel-Ziv complexity, which measures the generation rate of new patterns along a digital sequence, is closely related to such important source properties as entropy and compression ratio, but, in contrast to these, it is a property of individual sequences. In this article, we propose to exploit this concept to estimate (or, at least, to bound from below) the entropy of neural discharges (spike trains). The main advantages of this method include fast convergence of the estimator (as supported by numerical simulation) and the fact that there is no need to know the probability law of the process generating the signal. Furthermore, we present numerical and experimental comparisons of the new method against the standard method based on word frequencies, providing evidence that this new approach is an alternative entropy estimator for binned spike trains. Affiliations:
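The word-frequency-free estimation idea can be sketched in a few lines. Below is a textbook implementation of the LZ76 parsing (a reconstruction, not the authors' code): the number of distinct phrases c(n), normalized by n/log₂n, gives an entropy-rate estimate for a binarized (binned) spike train.

```python
import math

def lz76_complexity(s: str) -> int:
    """Number of phrases in the LZ76 parsing of s: each new phrase is
    the shortest prefix of the remaining suffix that does not occur as
    a substring starting within the already-scanned part."""
    n, i, phrases = len(s), 0, 0
    while i < n:
        l = 1
        # extend the candidate phrase while it still occurs in s[:i+l-1]
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def lz_entropy_estimate(s: str) -> float:
    """Normalized complexity c(n) * log2(n) / n, an entropy-rate
    estimate (bits per symbol) for a binary sequence."""
    n = len(s)
    return lz76_complexity(s) * math.log2(n) / n
```

For a nearly constant train the estimate is close to 0 bits per symbol, while an i.i.d. fair-coin train approaches 1 as n grows; real binned spike trains fall in between, which is what makes the normalized complexity usable as an entropy estimator.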
 
26.  Szczepański J., Wajnryb E., Amigó J.M.^{♦}, Sanchez-Vives M.V.^{♦}, Slater M.^{♦}, Biometric random number generators, COMPUTERS AND SECURITY, ISSN: 0167-4048, DOI: 10.1016/S0167-4048(04)00064-1, Vol.23, No.1, pp.77-84, 2004
Abstract: Up to now biometric methods have been used in cryptography for authentication purposes. In this paper we propose to use biological data for generating sequences of random bits. We point out that this new approach could be particularly useful to generate seeds for pseudorandom number generators and so-called "key sessions". Our method is very simple and is based on the observation that, for typical biometric readings, the last binary digits fluctuate "randomly". We apply our method to two data sets, the first based on animal neurophysiological brain responses and the second on human galvanic skin response. For comparison we also test our approach on numerical samplings of the Ornstein-Uhlenbeck stochastic process. To verify the randomness of the sequences generated, we apply the standard suite of statistical tests (FIPS 140-2) recommended by the National Institute of Standards and Technology for studying the quality of physical random number generators, especially those implemented in cryptographic modules. Additionally, to confirm the high cryptographic quality of the biometric generators, we also use the often recommended Maurer's universal test and the Lempel-Ziv complexity test, which estimate the entropy of the source. The results of all these verifications show that, after appropriate choice of encoding and experimental parameters, the sequences obtained exhibit excellent statistical properties, which opens the possibility of a new design technology for true random number generators. It remains a challenge to find appropriate biological phenomena characterized by easy accessibility, fast sampling rate, high accuracy of measurement and variability of sampling rate. Affiliations:
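The core recipe (digitize the readings, keep the last binary digits, then test the stream) can be sketched as follows. This is a reconstruction under stated assumptions, not the authors' code: an Ornstein-Uhlenbeck sample path stands in for the biometric readings (the paper uses it only for comparison), and only the monobit bound of FIPS 140-2 is checked, not the full suite.

```python
import random

def ou_samples(n, theta=1.0, sigma=1.0, dt=0.01, seed=1):
    """Euler-Maruyama sample path of an Ornstein-Uhlenbeck process,
    a stand-in here for a stream of noisy biometric readings."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def last_bits(samples, resolution=1e-4):
    """Digitize each reading at the given resolution and keep only its
    least significant bit, which fluctuates 'randomly'."""
    return [int(round(s / resolution)) & 1 for s in samples]

def monobit_ok(bits):
    """FIPS 140-2 monobit test: in a 20000-bit stream the number of
    ones must lie strictly between 9725 and 10275."""
    ones = sum(bits)
    return len(bits) == 20000 and 9725 < ones < 10275
```

A real design would run the remaining FIPS 140-2 tests (poker, runs, long-run) plus Maurer's universal test, and would draw the samples from actual neurophysiological or galvanic skin measurements rather than a simulated process.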
 
27.  Amigó J.M.^{♦}, Szczepański J., Wajnryb E., Sanchez-Vives M.V.^{♦}, On the number of states of the neuronal sources, BIOSYSTEMS, ISSN: 0303-2647, DOI: 10.1016/S0303-2647(02)00156-9, Vol.68, No.1, pp.57-66, 2003 Abstract: In a previous paper (Proceedings of the World Congress on Neuroinformatics (2001)) the authors applied the so-called Lempel-Ziv complexity to study neural discharges (spike trains) from an information-theoretical point of view. Along with other results, it is shown there that this concept of complexity allows one to characterize the responses of primary visual cortical neurons to both random and periodic stimuli. To this aim we modeled the neurons as information sources and the spike trains as messages generated by them. In this paper, we study further consequences of this mathematical approach, this time concerning the number of states of such neuronal information sources. In this context, the state of an information source means an internal degree of freedom (or parameter) which allows outputs with more general stochastic properties, since symbol generation probabilities at every time step may additionally depend on the value of the current state of the neuron. Furthermore, if the source is ergodic and Markovian, the number of states is directly related to the stochastic dependence lag of the source and provides a measure of the autocorrelation of its messages. Here, we find that the number of states of the neurons depends on the kind of stimulus and the type of preparation (in vivo versus in vitro recordings), thus providing another way of differentiating neuronal responses. In particular, we observed that (for the encoding methods considered) in vitro sources have a higher lag than in vivo sources for periodic stimuli. This supports the conclusion put forward in the paper mentioned above that, for the same kind of stimulus, in vivo responses are more random (hence, more difficult to compress) than in vitro responses and, consequently, the former transmit more information than the latter. Keywords: Spike trains, Encoding, Lempel-Ziv complexity, Entropy, Internal states, Numerical invariants for neuronal responses Affiliations:
 
28.  Szczepański J., Amigó J.M.^{♦}, Wajnryb E., Sanchez-Vives M.V.^{♦}, Application of Lempel-Ziv complexity to the analysis of neural discharges, Network: Computation in Neural Systems, ISSN: 0954-898X, DOI: 10.1088/0954-898X_14_2_309, Vol.14, No.2, pp.335-350, 2003 Abstract: Pattern matching is a simple method for studying the properties of information sources based on individual sequences (Wyner et al 1998 IEEE Trans. Inf. Theory 44 2045-56). In particular, the normalized Lempel-Ziv complexity (Lempel and Ziv 1976 IEEE Trans. Inf. Theory 22 75-88), which measures the rate of generation of new patterns along a sequence, is closely related to such important source properties as entropy and information compression ratio. We make use of this concept to characterize the responses of neurons of the primary visual cortex to different kinds of stimulus, including visual stimulation (sinusoidal drifting gratings) and intracellular current injections (sinusoidal and random currents), under two conditions (in vivo and in vitro preparations). Specifically, we digitize the neuronal discharges with several encoding techniques and employ the complexity curves of the resulting discrete signals as fingerprints of the stimuli ensembles. Our results show, for example, that if the neural discharges are encoded with a particular one-parameter method ('interspike time coding'), the normalized complexity remains constant within some classes of stimuli for a wide range of the parameter. Such constant values of the normalized complexity then allow the differentiation of the stimuli classes. With other encodings (e.g. 'bin coding'), the whole complexity curve is needed to achieve this goal. In any case, it turns out that the normalized complexity of the neural discharges in vivo is higher (and hence carries more information in the sense of Shannon) than in vitro for the same kind of stimulus. Affiliations:
 
29.  Amigó J.M.^{♦}, Szczepański J., Approximations of Dynamical Systems and Their Applications to Cryptography, International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, ISSN: 0218-1274, DOI: 10.1142/S0218127403007771, Vol.13, No.7, pp.1937-1948, 2003 Abstract: In recent years a new approach to constructing secure block and stream ciphers has been developed using the theory of dynamical systems. Since a block cryptosystem is generally, from the mathematical point of view, a family (parametrized by the keys) of permutations of n-bit numbers, one of the main problems of this approach is to adapt the dynamics defined by a map f to the block structure of the cryptosystem. In this paper we propose a method based on the approximation of f by periodic maps Tn (e.g. some interval exchange transformations). The approximation of automorphisms of measure spaces by periodic automorphisms was introduced by Halmos and Rohlin. One important aspect studied in our paper is the relation between the dynamical properties of the map f (say, ergodicity or mixing) and the immunity of the resulting cipher to cryptolinear attacks, which is currently one of the standard benchmarks for cryptosystems to be considered secure. Linear cryptanalysis, first proposed by M. Matsui, exploits some statistical inhomogeneities of expressions called linear approximations for a given cipher. Our paper quantifies immunity to cryptolinear attacks in terms of the approximation speed of the map f by the periodic Tn. We show that the most resistant block ciphers are expected when the approximated dynamical system is mixing. Affiliations:
 
30.  Szczepański J., Michałek T., Random Fields Approach to the Study of DNA Chains, Journal of Biological Physics, ISSN: 0092-0606, DOI: 10.1023/A:1022508206826, Vol.29, pp.39-54, 2003 Abstract: We apply random field theory to the study of DNA chains, which we assume to be trajectories of a stochastic process. We construct a statistical potential between nucleotides corresponding to the probabilities of those trajectories that can be obtained from the DNA data base containing millions of sequences. It turns out that this potential has an interpretation in terms of quantities naturally arrived at during the study of the evolution of species, i.e. probabilities of mutations of codons. Making use of recently performed statistical investigations of DNA we show that this potential has different qualitative properties in coding and noncoding parts of genes. We apply our model to data for various organisms and obtain a good agreement with the results presented in the literature. We also argue that the coding/noncoding boundaries can correspond to jumps of the potential. Keywords: codons, DNA chain, entropy, exons, introns, mutation, random field, stochastic process Affiliations:
 
31.  Szczepański J., A new result on the Nirenberg problem for expanding maps, Nonlinear Analysis: Theory, Methods & Applications, ISSN: 0362-546X, DOI: 10.1016/S0362-546X(99)00180-7, Vol.43, No.1, pp.91-99, 2001 Keywords: Nirenberg problem, Expanding map Affiliations:
 
32.  Szczepański J., Kotulski Z., Pseudorandom Number Generators Based on Chaotic Dynamical Systems, Open Systems & Information Dynamics, ISSN: 1230-1612, DOI: 10.1023/A:1011950531970, Vol.8, No.2, pp.137-146, 2001 Abstract: Pseudorandom number generators are used in many areas of contemporary technology such as modern communication systems and engineering applications. In recent years a new approach to secure transmission of information based on the application of the theory of chaotic dynamical systems has been developed. In this paper we present a method of generating pseudorandom numbers applying discrete chaotic dynamical systems. The idea of construction of chaotic pseudorandom number generators (CPRNG) intrinsically exploits the property of extreme sensitivity of trajectories to small changes of initial conditions, since the generated bits are associated with trajectories in an appropriate way. To ensure good statistical properties of the CPRNG (which determine its quality) we assume that the dynamical systems used are also ergodic or preferably mixing. Finally, since chaotic systems often appear in realistic physical situations, we suggest a physical model of CPRNG. Affiliations:
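A minimal sketch of the idea described above (the logistic map and the threshold encoding are illustrative choices of this sketch, not the specific construction of the paper): iterate a mixing chaotic map and record which half of the interval each trajectory point falls into.

```python
def chaotic_bits(n, x0=0.3141592653589793, mu=4.0):
    """Generate n bits from the logistic map x -> mu*x*(1-x); for mu = 4
    the map is ergodic and mixing on [0, 1]. Each output bit records
    which half of the interval the trajectory visits, so the bitstream
    inherits the map's extreme sensitivity to the seed x0."""
    x, bits = x0, []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits
```

Two seeds differing by 1e-12 produce bitstreams that decorrelate after a few dozen iterations, which is exactly the sensitivity the construction exploits; a production CPRNG would additionally have to deal with floating-point periodicity and discard transients.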
 
33.  Amigó J.M.^{♦}, Szczepański J., A Conceptual Guide to Chaos Theory, Prace IPPT - IFTR Reports, ISSN: 2299-3657, No.9, pp.1-43, 1999  
34.  Kotulski Z., Szczepański J., Górski K.^{♦}, Paszkiewicz A.^{♦}, Zugaj A.^{♦}, Application of discrete chaotic dynamical systems in cryptography - DCC method, International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, ISSN: 0218-1274, DOI: 10.1142/S0218127499000778, Vol.9, No.6, pp.1121-1135, 1999 Abstract: In the paper we propose a method of constructing cryptosystems, utilizing the non-predictability property of discrete chaotic systems. We point out the requirements for such systems to ensure their security. The presented algorithms of encryption and decryption are based on multiple iteration of a certain dynamical chaotic system coming from gas dynamics models. A plaintext message specifies a part of the initial condition of the system (a particle's initial position). A secret key specifies the remaining part of the initial condition (the particle's initial angle) as well as a sequence of discrete choices of the preimages in the encryption procedure. We also discuss problems connected with the practical realization of such chaotic cryptosystems. Finally we demonstrate numerical experiments illustrating the basic properties of the proposed cryptosystem. Affiliations:
 
35.  Szczepański J., On a problem of Nirenberg concerning expanding maps in Hilbert space, Proceedings of the American Mathematical Society, ISSN: 1088-6826, DOI: 10.2307/2159486, Vol.116, No.4, pp.1041-1044, 1992 Abstract: Let H be a Hilbert space and f: H → H a continuous map which is expanding (i.e., ||f(x) - f(y)|| >= ||x - y|| for all x, y ∈ H) and such that f(H) has nonempty interior. Are these conditions sufficient to ensure that f is onto? This question was stated by Nirenberg in 1974. In this paper we give a partial negative answer to this problem; namely, we present an example of a map F: H → H which is not onto, is continuous, F(H) has nonempty interior, and for every x, y ∈ H there is n₀ ∈ ℕ (depending on x and y) such that for every n > n₀, ||F^n(x) - F^n(y)|| >= c^(n-m)||x - y||, where F^n is the nth iterate of the map F, c is a constant greater than 2, and m is an integer depending on x and y. Our example satisfies ||F(x)|| = c||x|| for all x ∈ H. We show that no map with the above properties exists in the finite-dimensional case. Affiliations:

List of chapters in recent monographs
1. 555  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Zarzycki H.^{♦}, Szczepański J., Theory and Applications of Ordered Fuzzy Numbers. Studies in Fuzziness and Soft Computing, chapter: Fuzzy Numbers Applied to a Heat Furnace Control, Springer, 356, pp.269-288, 2017 
Conference papers
1.  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Szczepański J., Zarzycki H.^{♦}, Two New Defuzzification Methods Useful for Different Fuzzy Arithmetics, BOS2016/IWIFSGN2016, 14th National Conference of Operational and Systems Research (BOS) / 15th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets (IWIFSGN), 2016-10-12/10-14, Warszawa (PL), DOI: 10.1007/978-3-319-65545-1_9, Vol.559, pp.83-101, 2018 Abstract: One of the many reasons why humans search for new solutions is the inspiration for innovation perceived in natural processes and phenomena. The authors of the paper present new algorithms for the defuzzification block, the final stage of a fuzzy controller (fuzzy control system), in which the defuzzified value controls a given object. The new methods presented are: Golden Ratio (GR) and Mandala Factor (MF). The first of them uses the ancient golden ratio rule, known, among others, from the Fibonacci sequence. The second proposal is based on the interpretation of a drawing technique used in Asia, consisting in arranging pictures of colored sand grains. In Tibetan Buddhism this technique is known as the Mandala, a symbol of perfection and harmony. The interpretation of the perfection symbol and the Golden Ratio method in this paper are compared with other methods used in the defuzzification process, including the weighted average method, the centroid, and the mean of maxima. The setting for the solutions presented here is the ordered fuzzy numbers (OFN) theory, which allows one to use the trend of a given phenomenon as well as to wield mathematical methods more precisely. A special property of the proposed methods is their sensitivity to the OFN number order. This means that the MF and GR operators applied to numbers of the same shape but of opposite orders would result in different defuzzification values. The discussion of the research includes a comparison of the existing defuzzification operators as regards their sensitivity to order. Keywords: Fuzzy logic, Ordered fuzzy numbers, OFN, Defuzzification Affiliations:
 
2.  Czerniak J.M.^{♦}, Zarzycki H.^{♦}, Dobrosielski W.T.^{♦}, Szczepański J., Application of OFN Notation in the Fuzzy Observation of WIG20 Index Trend for the Period 2008-2016, BOS2016/IWIFSGN2016, 14th National Conference of Operational and Systems Research (BOS) / 15th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets (IWIFSGN), 2016-10-12/10-14, Warszawa (PL), DOI: 10.1007/978-3-319-65545-1_18, Vol.559, pp.190-201, 2018 Abstract: The article concerns the issue of seeking patterns in trends. In the study, a method to detect patterns in trends recorded in linguistic form has been proposed. Linguistic variables take their values as a result of calculations in Ordered Fuzzy Numbers notation. Thus, in the first stage, fuzzification of the source data occurs. Transposition of the parameters was applied to daily quotations (min, max, the opening value, the closing value and the direction of change), which were interpreted as a single OFN number. This is the first use of this notation to describe a stock index, and it allows five different parameters to be described in a single number. Then the data are converted into linguistic form. The level of trend sequence similarity is determined by the following parameters: the frame size of the pattern, the percentage similarity of the trend sequence to a frame set at the outset, a threshold indicating how many trend fragments are consistent with the frame, and the frequency of the pattern occurrences. Patterns detected in this way are naturally characterized by varying support and coefficients of similarity, both for the whole pattern and for the individual elements. For the purposes of this study, we developed a dedicated computer program performing pattern search. As research material, the main index of the Warsaw Stock Exchange, i.e. WIG20 from the years 2008-2016, was used as a data set. This preliminary study is the beginning of the development of rule-based forecasting methods, and further experiments will be carried out in this direction. Keywords: WIG20, OFN, Trend, Rule-based forecasting, Pattern recognition, Financial engineering Affiliations:
 
3.  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Szczepański J., Zarzycki H.^{♦}, Triangular Expanding, A New Defuzzification Method on Ordered Fuzzy Numbers, Advances in Intelligent Systems and Computing, ISSN: 2194-5357, DOI: 10.1007/978-3-319-66830-7_54, Vol.641, pp.605-619, 2018 Abstract: At the beginning of the article the authors describe a new trend in artificial intelligence associated with fuzzy sets and the accompanying derivative solutions, which include the LR numbers of Dubois and Prade. On their basis, the redefined theory has started a new trend in the form of ordered fuzzy numbers (OFN). The main features of ordered fuzzy numbers are described further in this article. Given the nature of this article, which concerns the proposed defuzzification method, the authors discuss a fuzzy controller model and in particular the defuzzification process. Criteria used for conventional fuzzy numbers and ordered fuzzy numbers are also presented. In the further part of the article the defuzzification method called Triangular Expanding is presented. The authors compare it to the Geometrical Mean method introduced earlier, which inspired this solution. Results of comparisons with other methods such as FOM, LOM, and COG are presented in the paper as well. A summary including conclusions and directions of further research is provided at the end. Keywords: Fuzzy logic, Ordered fuzzy numbers Affiliations:
 
4.  Dobrosielski W.T.^{♦}, Zarzycki H.^{♦}, Czerniak J.M.^{♦}, Szczepański J., New fuzzy numbers comparison operators in energy effectiveness simulation and modeling systems, ECMS 2018, 32nd EUROPEAN Conference on Modelling and Simulation, 2018-05-22/05-25, Wilhelmshaven (DE), DOI: 10.7148/2018-0454, pp.1-6, 2018 Abstract: Energy efficiency is often a key optimization problem. Many control systems use fuzzy logic and as a result apply comparison operators to fuzzy numbers. The article deals with the issue of comparing fuzzy numbers. The similarity relation is probably the most frequently used and the most difficult convergence measure to determine precisely. Analysis of the similarity of two objects is a basic assessment tool and constitutes the basis for reasoning by analogy. It also directly affects the energy effectiveness of the universe that it controls. This article presents the methods for determining similarity used in fuzzy logic. Many of these methods were dedicated only to triangular or trapezoidal fuzzy numbers (Dobrosielski et al. 2017, Chi-Tsuen Yeh 2017, Abbasbandy and Hajjar 2009). This was a computational inconvenience and posed a question about the axiological basis of this type of comparison. The authors propose two new approaches for comparing fuzzy numbers using one of the known extensions of fuzzy numbers (Kacprzyk and Wilbik 2009, 2005). This allowed the operation to be simplified and the duality to be eliminated (Zadrożny, 2004). Keywords: fuzzy logic, comparison, OFN, GR, ML, TR Affiliations:
 
5.  Dobrosielski W.T.^{♦}, Szczepański J., Zarzycki H.^{♦}, A Proposal for a Method of Defuzzification Based on the Golden Ratio-GR, IWIFSGN2015, 14th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets, 2015-10-26/10-28, Kraków (PL), DOI: 10.1007/978-3-319-26211-6_7, Vol.401, pp.75-84, 2016 Abstract: This article presents a proposal for a new method of defuzzification in a fuzzy controller, based on the concept of the golden ratio derived from the Fibonacci series [1]. The origin of the method was the observation of numerous instances of the golden ratio in such diverse fields as biology, architecture, medicine, and painting. A particular area of its occurrence is genetics, where we find the golden ratio in the very structure of the DNA molecule [2] (deoxyribonucleic acid molecules are 21 angstroms wide and 34 angstroms long for each full cycle of one double helix). This fact makes the ratio in the Fibonacci series in some sense a universal design principle used by man and nature alike. In keeping with the requirements, the authors of the present study first explain the essential concepts of fuzzy logic, including in particular the notions of a fuzzy controller and a method of defuzzification. Then, they postulate the use of the golden ratio in the process of defuzzification and call the idea the Golden Ratio (GR) Method. In the subsequent part of the article, the proposed GR-based instrument is compared with the classical methods of defuzzification, including COG, FOM, and LOM. In the final part, the authors carry out numerous calculations and formulate conclusions which serve to classify the proposed method. At the end they present directions of further research. Keywords: Fuzzy logic, Fuzzy sets, Fuzzy control system, Defuzzification, Fibonacci series Affiliations:
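The listing above does not spell out the GR formula, so the sketch below is a hypothetical reading (an assumption of this sketch, not the authors' published definition): take as output the abscissa at which the cumulative area under the membership function reaches 1/φ of the total area, and compare it with the classical centre of gravity (COG).

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, about 1.618

def golden_ratio_defuzz(xs, mu):
    """Hypothetical GR defuzzifier: the x at which the cumulative
    (trapezoidal) area under the membership function mu reaches
    1/PHI of the total area."""
    total, cum = 0.0, [0.0]
    for i in range(1, len(xs)):
        total += 0.5 * (mu[i] + mu[i - 1]) * (xs[i] - xs[i - 1])
        cum.append(total)
    target = total / PHI
    for x, a in zip(xs, cum):
        if a >= target:
            return x
    return xs[-1]

def centroid_defuzz(xs, mu):
    """Classical centre-of-gravity (COG) defuzzifier, for comparison."""
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0
```

For a symmetric triangular membership function COG, FOM, and LOM all coincide at the peak, while this golden-ratio point lands off-centre; an asymmetric split of this kind is also what makes order-sensitive variants meaningful for OFN, where reversing the order of a number can change the result.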

Conference abstracts
1.  Pręgowska A., Proniewska K.^{♦}, van Dam P.^{♦}, Dudek D.^{♦}, Szczepański J., Automatic arrhythmia detection from two-channel ambulatory ECG recordings using Shannon Information Theory-based algorithms, NFIC, 20th New Frontiers in Interventional Cardiology, 2019-12-11/12-13, Kraków (PL), pp.9, 2019  
2.  Proniewska K.^{♦}, Pręgowska A., van Dam P.^{♦}, Szczepański J., Automated ECG and acoustic signal based diagnosis of sleep disorders, NFIC 2018, 19th Interventional Cardiology Workshop New Frontiers in Interventional Cardiology, 2018-12-06/12-06, Kraków (PL), pp.10-11, 2018  
3.  Paprocki B.^{♦}, Pręgowska A., Szczepański J., Information Processing in Brain-Inspired Networks: Size and Density Effects, SolMech 2016, 40th Solid Mechanics Conference, 2016-08-29/09-02, Warszawa (PL), No.P192, pp.12, 2016  
4.  Szczepański J., Sanchez-Vives M.V.^{♦}, Arnold M.M.^{♦}, Montejo N.^{♦}, Paprocki B.^{♦}, Pręgowska A., Amigó J.M.^{♦}, Wajnryb E., Analyzing Neuroscience Signals using Information Theory and Complexity Shannon Communication Approach, 12th INCF Workshop on Node Communication and Collaborative Neuroinformatics, 2015-04-16/04-17, Warszawa (PL), pp.132, 2015  
5.  Szczepański J., Paprocki B.^{♦}, Transmission efficiency in the brain-like neuronal networks. Information and energetic aspects, 10th International Neural Coding Workshop, 2012-09-02/09-08, Prague (CZ), pp.127-128, 2012 Keywords: Neuronal Communication, Brain-like Network, Shannon Theory Affiliations:
 
6.  Paprocki B.^{♦}, Szczepański J., Effectiveness of information transmission in the brain-like communication models, 10th International Neural Coding Workshop, 2012-09-02/09-08, Prague (CZ), pp.93-94, 2012 Keywords: Brain-like network, Information transmission, Neuronal computation Affiliations:
