Prof. Janusz Szczepański, Ph.D., Dr. Habil. 

Doctoral thesis
1985  The Liouville Equation in an Infinite-Dimensional Separable Hilbert Space (Równanie Liouville'a w nieskończenie wymiarowej ośrodkowej przestrzeni Hilberta)
Habilitation thesis
2007-06-14  Applications of Dynamical Systems in Cryptography (Zastosowanie układów dynamicznych w kryptografii)
Professor
2014-07-28  Title of professor
Supervision of doctoral theses
1.  2015-12-04  Paprocki Bartosz (UKW)  Analysis of the efficiency of data transmission in neural cells and networks using Information Theory methods (Analiza wydajności transmisji danych w komórkach i sieciach neuronowych metodami Teorii Informacji)
2.  2012-11-29  Chmielowiec Andrzej  Efficient methods of generating secure parameters for public-key algorithms (Wydajne metody generowania bezpiecznych parametrów algorytmów klucza publicznego)
Recent publications
1.  Pręgowska A., Kaplan E.^{♦}, Szczepański J., How Far can Neural Correlations Reduce Uncertainty? Comparison of Information Transmission Rates for Markov and Bernoulli Processes, International Journal of Neural Systems, ISSN: 0129-0657, DOI: 10.1142/S0129065719500035, Vol.29, No.8, pp.1950003-1-13, 2019
Abstract: The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in the firing rates of individual spikes (rate code) or by the precise timing of every spike (temporal code)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon’s definition of information is to combine information with uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty or the amount of information, but by how much? In this paper we address this question by a direct comparison of the information per symbol conveyed by the words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter s, which is the sum of transition probabilities from the no-spike state to the spike state and vice versa. We found that in this case too a crucial role is played by the same parameter s. We calculated the maximal and minimal bounds of the quotient of ITRs for these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source compared with the information in the corresponding Bernoulli source for a given word length. Our results show that in the case of correlated signals the loss of information is relatively small, and thus temporal codes, which are more energetically efficient, can replace rate codes effectively. These results were confirmed by experiments.
Keywords: Shannon information theory, information source, information transmission rate, firing rate, neural coding
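The Markov-versus-Bernoulli comparison described in this abstract can be sketched numerically. For a binary Markov source with transition probabilities p01 (no-spike to spike) and p10 (spike to no-spike), the jumping parameter is s = p01 + p10, and the Bernoulli source with the same firing rate has entropy h(p01/s). The parameter values below are illustrative and not taken from the paper:

```python
from math import log2

def h(p):
    """Binary (Shannon) entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def markov_entropy_rate(p01, p10):
    """Entropy rate of a two-state Markov chain.
    p01: P(no-spike -> spike), p10: P(spike -> no-spike)."""
    s = p01 + p10            # the "jumping" parameter
    pi1 = p01 / s            # stationary firing rate
    pi0 = p10 / s
    return pi0 * h(p01) + pi1 * h(p10)

def bernoulli_entropy_rate(p01, p10):
    """Entropy of the Bernoulli source with the same firing rate."""
    return h(p01 / (p01 + p10))

# Conditioning never increases entropy, so the Markov (temporal) rate is
# bounded by the Bernoulli (rate-code) entropy at the same firing rate,
# and for moderate correlations the loss is small.
hm = markov_entropy_rate(0.2, 0.3)
hb = bernoulli_entropy_rate(0.2, 0.3)
print(hm <= hb)   # True
```

With p01 = p10 = 0.5 the chain is memoryless and both quantities equal 1 bit per symbol, which is a convenient sanity check.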
 
2.  Pręgowska A., Proniewska K.^{♦}, van Dam P.^{♦}, Szczepański J., Using Lempel-Ziv complexity as effective classification tool of the sleep-related breathing disorders, Computer Methods and Programs in Biomedicine, ISSN: 0169-2607, DOI: 10.1016/j.cmpb.2019.105052, Vol.182, pp.105052-1-7, 2019
Keywords: Information theory, Lempel-Ziv complexity, Entropy, ECG, Sleep-related breathing disorders, Randomness
 
3.  Błoński S., Pręgowska A., Michalek T., Szczepański J., The use of Lempel-Ziv complexity to analyze turbulence and flow randomness based on velocity fluctuations, BULLETIN OF THE POLISH ACADEMY OF SCIENCES: TECHNICAL SCIENCES, ISSN: 0239-7528, DOI: 10.24425/bpasts.2019.130876, Vol.67, No.5, pp.957-962, 2019
Abstract: One of the mathematical tools to measure the generation rate of new patterns along a sequence of symbols is the Lempel-Ziv complexity (LZ). Under additional assumptions, LZ is an estimator of entropy in the Shannon sense. Since entropy is considered a measure of randomness, LZ can also be treated as a randomness indicator. In this paper, we applied the LZ concept to the analysis of different flow regimes in cold-flow combustor models. Experimental data for two combustor configurations, motivated by the need for efficient mixing, were considered. Extensive computer analysis was applied to develop a complexity approach to the analysis of velocity fluctuations recorded with hot-wire anemometry and the PIV technique. A natural encoding method for these velocity fluctuations was proposed. It turned out that with this encoding the complexity values of the sequences are well correlated with the values obtained by means of the RMS method (larger/smaller complexity corresponds to larger/smaller RMS). However, our calculations revealed the interesting result that the most complex, i.e. most random, behavior does not coincide with the “most turbulent” point determined by the RMS method, but is located at the point with maximal average velocity. It seems that the complexity method can be particularly useful for analyzing turbulent and unsteady flow regimes. Moreover, complexity can also be used to establish other flow characteristics, such as ergodicity or mixing.
Keywords: turbulence, complexity, entropy, randomness
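The Lempel-Ziv (1976) complexity used in several of the papers above counts the number of new phrases encountered along a symbol sequence; normalized by n/log2(n), it serves as an entropy-rate estimator. A minimal sketch of one common variant of the exhaustive parsing (the papers may use a slightly different normalization):

```python
def lz76_complexity(s):
    """Number of phrases in a Lempel-Ziv (1976) style exhaustive
    parsing of the string s: each phrase is extended while it has
    already occurred in the preceding text (overlap allowed)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

print(lz76_complexity("0" * 16))             # 2: a constant sequence
print(lz76_complexity("01" * 8))             # 3: a periodic sequence
print(lz76_complexity("0110100110010110"))   # higher: less regular
```

Constant and periodic sequences saturate at a tiny phrase count, while irregular sequences keep producing new phrases, which is what makes the count usable as a randomness indicator.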
 
4.  Pręgowska A., Casti A.^{♦}, Kaplan E.^{♦}, Wajnryb E., Szczepański J., Information processing in the LGN: a comparison of neural codes and cell types, BIOLOGICAL CYBERNETICS, ISSN: 0340-1200, DOI: 10.1007/s00422-019-00801-0, Vol.113, No.4, pp.453-464, 2019
Abstract: To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information that is transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat Lateral Geniculate Nucleus (LGN), which is the visual portion of the thalamus, through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the Firing Rate with the Shannon Information Transmission Rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively. This suggests that the energy used for spiking does not translate directly into the information to be transmitted. We also compared Firing Rates with Information Rates for XON and XOFF cells. We found that for XON cells the Firing Rate and Information Rate often behave in completely different ways, while for XOFF cells these rates are much more highly correlated. Our results suggest that for XON cells a more efficient “temporal code” is employed, while for XOFF cells a straightforward “rate code” is used, which is more reliable and is correlated with energy consumption.
Keywords: Shannon information theory, Cat LGN, ON–OFF cells, Neural coding, Entropy, Firing rate
 
5.  Pręgowska A., Szczepański J., Wajnryb E., Temporal code versus rate code for binary Information Sources, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2016.08.034, Vol.216, pp.756-762, 2016
Abstract: Neuroscientists formulate very different hypotheses about the nature of neural coding. At one extreme, it has been argued that neurons encode information through relatively slow changes in the arrival rates of individual spikes (rate codes) and that the irregularity in the spike trains reflects the noise in the system. At the other extreme, this irregularity is the code itself (temporal codes), so that the precise timing of every spike carries additional information about the input. It is well known that in the estimation of the Shannon Information Transmission Rate, the patterns and temporal structures are taken into account, while the “rate code” is already determined by the firing rate, i.e. by the spike frequency. In this paper we compare these two types of codes for binary Information Sources, which model encoded spike trains. Assuming that the information transmitted by a neuron is governed by an uncorrelated stochastic process or by a process with memory, we compare the Information Transmission Rates carried by such spike trains with their firing rates. Here we show that a crucial role in the relation between information transmission and firing rates is played by a factor that we call the “jumping” parameter. This parameter corresponds to the probability of transitions from the no-spike state to the spike state and vice versa. For low jumping parameter values, the quotient of information and firing rates is a monotonically decreasing function of the firing rate, and there is therefore a straightforward, one-to-one relation between temporal and rate codes. However, it turns out that for large enough values of the jumping parameter this quotient is a non-monotonic function of the firing rate and it exhibits a global maximum, so that in this case there is an optimal firing rate. Moreover, there is no one-to-one relation between information and firing rates, so the temporal and rate codes differ qualitatively. This leads to the observation that the behavior of the quotient of information and firing rates for a large jumping parameter value is especially important in the context of bursting phenomena.
Keywords: Information Theory, Information Source, Stochastic process, Information transmission rate, Firing rate
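The monotonic-versus-non-monotonic behavior of the quotient can be checked directly. Writing the firing rate as p and fixing the jumping parameter s gives p01 = s*p and p10 = s*(1-p); the grid of rates below is an illustrative choice kept inside the range where both transition probabilities stay in [0, 1]:

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def itr_over_firing_rate(p, s):
    """Quotient of the Markov entropy rate and the firing rate p,
    for a fixed jumping parameter s = p01 + p10."""
    p01, p10 = s * p, s * (1 - p)
    assert 0.0 <= p01 <= 1.0 and 0.0 <= p10 <= 1.0
    return ((1 - p) * h(p01) + p * h(p10)) / p

# Small s: the quotient decreases monotonically with the firing rate.
small = [itr_over_firing_rate(p, 0.5) for p in (0.1, 0.2, 0.3, 0.4)]
print(small == sorted(small, reverse=True))            # True

# Large s: the quotient attains an interior maximum, i.e. there is
# an optimal firing rate (valid rate range for s = 1.8 is about
# [1 - 1/s, 1/s] = [0.444, 0.556]).
rates = [0.45 + 0.01 * k for k in range(11)]
vals = [itr_over_firing_rate(p, 1.8) for p in rates]
print(max(vals) > vals[0] and max(vals) > vals[-1])    # True
```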
 
6.  Pręgowska A., Szczepański J., Wajnryb E., Mutual information against correlations in binary communication channels, BMC NEUROSCIENCE, ISSN: 1471-2202, DOI: 10.1186/s12868-015-0168-0, Vol.16, No.32, pp.1-7, 2015
Keywords: Shannon information, Communication channel, Entropy, Mutual information, Correlation, Neuronal encoding
 
7.  Arnold M.M.^{♦}, Szczepański J., Montejo N.^{♦}, Amigó J.M.^{♦}, Wajnryb E., Sanchez-Vives M.V.^{♦}, Information content in cortical spike trains during brain state transitions, JOURNAL OF SLEEP RESEARCH, ISSN: 0962-1105, DOI: 10.1111/j.1365-2869.2012.01031.x, Vol.22, pp.13-21, 2013
Abstract: Even in the absence of external stimuli there is ongoing activity in the cerebral cortex as a result of recurrent connectivity. This paper attempts to characterize one aspect of this ongoing activity by examining how the information content carried by specific neurons varies as a function of brain state. We recorded from rats chronically implanted with tetrodes in the primary visual cortex during awake and sleep periods. Electroencephalogram and spike trains were recorded during 30-min periods, and 2–4 neuronal spikes were isolated per tetrode offline. All the activity included in the analysis was spontaneous, being recorded from the visual cortex in the absence of visual stimuli. The brain state was determined through a combination of behavior evaluation, electroencephalogram and electromyogram analysis. Information in the spike trains was determined by using Lempel-Ziv complexity. Complexity was used to estimate the entropy of neural discharges and thus the information content (Amigó et al., Neural Comput., 2004, 16: 717-736). The information content in spike trains (range 4–70 bits s−1) was evaluated during different brain states and particularly during the transition periods. Transitions toward states of deeper sleep coincided with a decrease of information, while transitions to the awake state resulted in an increase in information. Changes in both directions were of the same magnitude, about 30%. Information in spike trains showed a high temporal correlation between neurons, reinforcing the idea of the impact of the brain state on the information content of spike trains.
Keywords: awake, brain states, entropy, firing rate, information, sleep, spike train
 
8.  Paprocki B.^{♦}, Szczepański J., How do the amplitude fluctuations affect the neuronal transmission efficiency, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2012.11.001, Vol.104, pp.50-56, 2013
Abstract: Tremendous effort has been devoted to understanding the nature of neuronal coding, its high efficiency and the mechanisms governing it. Paprocki and Szczepanski [13] explored the model of neuron proposed by Levy and Baxter [12] and analyzed the efficiency with respect to the synaptic failure, activation threshold, firing rate and type of the input source. In this paper we study the influence of amplitude fluctuations (damping, uniform and amplifying), another important component in neuronal computations. The efficiency is understood in the sense of mutual information (between input and output signals), which constitutes the fundamental concept of the Shannon communication theory. Using high-quality entropy estimators we determined maximal values of the mutual information between input and output neuronal signals for a simple neuronal architecture. We observed that this maximal efficiency remains nearly constant, almost regardless of the fluctuation type. We have also found that for a wide range of thresholds, both for damping and amplifying fluctuations, the mutual information behaves in an opposite way to the corresponding correlations between input and output signals. These calculations confirm that the neuronal coding is much more subtle than the straightforward intuitive optimization of input-output correlations.
Keywords: Neural computation, Mutual information, Amplitude fluctuation, Activation threshold, Synaptic failure, Entropy estimation
 
9.  Paprocki B.^{♦}, Szczepański J., Transmission efficiency in ring, brain-inspired neuronal networks. Information and energetic aspects, Brain Research, ISSN: 0006-8993, DOI: 10.1016/j.brainres.2013.07.024, Vol.1536, pp.135-143, 2013
Abstract: Organisms often evolve as compromises, and many of these compromises can be expressed in terms of energy efficiency. Thus, many authors analyze the energetic costs of processes during information transmission in the brain. In this paper we study the information transmission rate per energy used in a class of ring, brain-inspired neural networks, which we assume to involve components such as excitatory and inhibitory neurons or long-range connections. In choosing the neuron model we followed the probabilistic approach proposed by Levy and Baxter (2002), which contains all essential qualitative mechanisms participating in the transmission process and provides results consistent with physiologically observed values.
Keywords: Information transmission efficiency, Mutual information, Brain-inspired network, Inhibitory neuron, Long-range connection, Neuronal computation
 
10.  Paprocki B.^{♦}, Szczepański J., Kołbuk D., Information transmission efficiency in neuronal communication systems, BMC NEUROSCIENCE, ISSN: 1471-2202, DOI: 10.1186/1471-2202-14-S1-P217, Vol.14(Suppl 1), No.P217, pp.1-2, 2013
Abstract: The nature and efficiency of brain transmission processes, their high reliability and efficiency, is one of the most elusive areas of contemporary science [1]. We study information transmission efficiency by considering neuronal communication as a Shannon-type channel. Thus, using high-quality entropy estimators, we evaluate the mutual information between input and output signals. We assume the model of neuron proposed by Levy and Baxter [2], which incorporates all essential qualitative mechanisms participating in the neural transmission process.
Keywords: transmission efficiency, neuronal communication, Shannon-type channel
 
11.  Szczepański J., Arnold M.^{♦}, Wajnryb E., Amigó J.M.^{♦}, Sanchez-Vives M.V.^{♦}, Mutual information and redundancy in spontaneous communication between cortical neurons, BIOLOGICAL CYBERNETICS, ISSN: 0340-1200, DOI: 10.1007/s00422-011-0425-y, Vol.104, pp.161-174, 2011
Abstract: An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited a behavior characterized by higher variability. We conjecture that this combination of approximately constant RMI and more variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with a higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates, in other words in a synergetic manner. The proposed method applies not only to stationary, but also to locally stationary neural signals.
Keywords: Neurons, Shannon information, Entropy, Mutual information, Redundancy, Visual cortex, Spike trains, Spontaneous activity
 
12.  Paprocki B.^{♦}, Szczepański J., Efficiency of neural transmission as a function of synaptic noise, threshold, and source characteristics, BIOSYSTEMS, ISSN: 0303-2647, DOI: 10.1016/j.biosystems.2011.03.005, Vol.105, pp.62-72, 2011
Abstract: There has been a growing interest in the estimation of information carried by a single neuron and multiple single units or populations of neurons in response to specific stimuli. In this paper we analyze, inspired by the article of Levy and Baxter (2002), the efficiency of neuronal communication by considering dendrosomatic summation as a Shannon-type channel (1948) and by considering such uncertain synaptic transmission as part of the dendrosomatic computation. Specifically, we study the Mutual Information between input and output signals for different types of neuronal network architectures by applying efficient entropy estimators. We analyze the influence of the following quantities affecting the transmission abilities of neurons: synaptic failure, activation threshold, firing rate and type of the input source. We observed a number of surprising, non-intuitive effects. It turns out that, especially for lower activation thresholds, significant synaptic noise can lead even to a twofold increase of the transmission efficiency. Moreover, the efficiency turns out to be a non-monotonic function of the activation threshold. We find a universal value of the threshold for which a local maximum of Mutual Information is achieved for most of the neuronal architectures, regardless of the type of the source (correlated and non-correlated). Additionally, to reach the global maximum the optimal firing rates must increase with the threshold. This effect is particularly visible for lower firing rates. For higher firing rates the influence of synaptic noise on the transmission efficiency is more advantageous. Noise is an inherent component of communication in biological systems; hence, based on our analysis, we conjecture that the neuronal architecture has been adjusted to make more effective use of this attribute.
Keywords: Neuronal computation, Entropy, Mutual Information, Estimators, Neuron, Quantal failure, Activation threshold, Neural network
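The role of quantal (synaptic) failure can be illustrated on a stripped-down version of this setting: a Z-channel in which a spike is deleted with failure probability f while silence is always transmitted faithfully. This sketch omits the amplitude fluctuations and activation threshold of the full Levy-Baxter model, so it does not reproduce the noise-can-help effect reported above; it only shows the basic mutual-information bookkeeping:

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information(q, f):
    """Mutual information (bits/symbol) across a Z-channel model of
    synaptic failure: the input spike (probability q) is deleted with
    probability f, silence always gets through.
    I(X;Y) = H(Y) - H(Y|X) = h(q(1-f)) - q*h(f)."""
    return h(q * (1 - f)) - q * h(f)

print(mutual_information(0.3, 0.0))   # no failure: full input entropy h(0.3)
print(mutual_information(0.3, 0.2))   # failure degrades transmission
print(mutual_information(0.3, 1.0))   # 0.0: everything is deleted
```

In this reduced channel, failure always hurts; the counter-intuitive gains in the paper arise from the interplay of failure with the activation threshold in multi-synapse architectures.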
 
13.  Szczepański J., On the distribution function of the complexity of finite sequences, INFORMATION SCIENCES, ISSN: 0020-0255, DOI: 10.1016/j.ins.2008.12.019, Vol.179, pp.1217-1220, 2009
Abstract: Investigations of the complexity of sequences lead to important applications such as effective data compression, testing of randomness, discriminating between information sources and many others. In this paper we establish formulae describing the distribution functions of random variables representing the complexity of finite sequences introduced by Lempel and Ziv in 1976. It is known that this quantity can be used as an estimator of entropy. We show that the distribution functions depend affinely on the probabilities of the so-called “exact” sequences.
Keywords: Lempel–Ziv complexity, Distribution function, Randomness
 
14.  Nagarajan R.^{♦}, Szczepański J., Wajnryb E., Interpreting non-random signatures in biomedical signals with Lempel-Ziv complexity, PHYSICA D: NONLINEAR PHENOMENA, ISSN: 0167-2789, DOI: 10.1016/j.physd.2007.09.007, Vol.237, pp.359-364, 2008
Abstract: Lempel-Ziv complexity (LZ) [J. Ziv, A. Lempel, On the complexity of finite sequences, IEEE Trans. Inform. Theory 22 (1976) 75-81] and its variants have been used widely to identify non-random patterns in biomedical signals obtained across distinct physiological states. Non-random signatures of the complexity measure can occur under nonlinear deterministic as well as non-deterministic settings. Surrogate data testing has also been encouraged in the past in conjunction with complexity estimates to make a finer distinction between various classes of processes. In this brief letter, we make two important observations: (1) non-Gaussian noise at the dynamical level can elude existing surrogate algorithms, namely phase-randomized surrogates (FT), amplitude-adjusted Fourier transform (AAFT) and iterated amplitude-adjusted Fourier transform (IAAFT) surrogates; thus any inference of nonlinear determinism as an explanation for the non-randomness is incomplete; (2) a decrease in complexity can be observed even across two linear processes with identical autocorrelation functions. The results are illustrated with a second-order autoregressive process with Gaussian and non-Gaussian innovations. AR(2) processes have been used widely to model several physiological phenomena, hence their choice. The results presented encourage cautious interpretation of non-random signatures in experimental signals using complexity measures.
Keywords: Lempel–Ziv complexity, Surrogate testing, Autoregressive process
 
15.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., On some properties of the discrete Lyapunov exponent, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2008.07.076, Vol.372, pp.6265-6268, 2008
Abstract: One of the possible by-products of discrete chaos is the application of its tools, in particular of the discrete Lyapunov exponent, to cryptography. In this Letter we explore this question in a very general setting.
 
16.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Theory and practice of chaotic cryptography, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2007.02.021, Vol.366, pp.211-216, 2007
Abstract: In this Letter we address some basic questions about chaotic cryptography, not least the very definition of chaos in discrete systems. We propose a conceptual framework and illustrate it with different examples from private and public key cryptography. We also elaborate on possible limits of chaotic cryptography.
 
17.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Discrete Lyapunov exponent and resistance to differential cryptanalysis, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, ISSN: 1549-7747, DOI: 10.1109/TCSII.2007.901576, Vol.54, No.10, pp.882-886, 2007
Abstract: In a recent paper, Jakimoski and Subbalakshmi provided a nice connection between the so-called discrete Lyapunov exponent of a permutation F defined on a finite lattice and its maximal differential probability, a parameter that measures the complexity of a differential cryptanalysis attack on the substitution defined by F. In this brief, we take a second look at their result to find some practical shortcomings. We also discuss more general aspects.
Keywords: Differential cryptanalysis, discrete Lyapunov exponent (DLE), maximum differential probability (DP)
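The maximal differential probability mentioned in this abstract is directly computable for small substitutions: for each nonzero input difference dx, tabulate the output differences S(x ^ dx) ^ S(x) and take the largest relative count. A sketch (the 3-bit example S-box below is arbitrary, not from the paper):

```python
def max_differential_probability(sbox):
    """Maximum differential probability DP of an S-box given as a
    list: max over dx != 0 of the most frequent output difference
    S(x ^ dx) ^ S(x), normalized by the domain size."""
    n = len(sbox)
    best = 0
    for dx in range(1, n):
        counts = {}
        for x in range(n):
            dy = sbox[x ^ dx] ^ sbox[x]
            counts[dy] = counts.get(dy, 0) + 1
        best = max(best, max(counts.values()))
    return best / n

# A linear permutation is maximally weak against differential attacks.
print(max_differential_probability(list(range(8))))   # 1.0

# An arbitrary nonlinear 3-bit permutation does better (closer to the
# lower bound 2/n that any bijection must respect).
print(max_differential_probability([0, 1, 3, 6, 7, 4, 5, 2]))
```

For a bijection the solutions x of S(x ^ dx) ^ S(x) = dy come in pairs {x, x ^ dx}, so DP can never drop below 2/n; resistant designs aim for values near that bound.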
 
18.  Kocarev L.^{♦}, Szczepański J., Amigó J.M.^{♦}, Tomovski I.^{♦}, Discrete chaos - I: theory, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, ISSN: 1549-8328, DOI: 10.1109/TCSI.2006.874181, Vol.53, No.6, pp.1300-1309, 2006
Abstract: We propose a definition of the discrete Lyapunov exponent for an arbitrary permutation of a finite lattice. For discrete-time dynamical systems, it measures the local (between neighboring points) average spreading of the system. We justify our definition by proving that, for large classes of chaotic maps, the corresponding discrete Lyapunov exponent approaches the largest Lyapunov exponent of a chaotic map when M → ∞, where M is the cardinality of the discrete phase space. In analogy with continuous systems, we say the system has discrete chaos if its discrete Lyapunov exponent tends to a positive number when M → ∞. We present several examples to illustrate the concepts being introduced.
Keywords: Chaos, discrete chaos, Lyapunov exponents
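The idea behind the discrete Lyapunov exponent can be sketched for the simplest case of a one-dimensional circular lattice, averaging the log-distance between images of neighboring points; the paper works with general finite lattices, so the neighbor and distance conventions below are an illustrative assumption:

```python
import math

def discrete_lyapunov(perm):
    """Sketch of a discrete Lyapunov exponent for a permutation of
    {0, ..., M-1}: average log of the circular lattice distance
    between images of adjacent points (neighbors at distance 1)."""
    M = len(perm)
    def d(a, b):
        return min(abs(a - b), M - abs(a - b))
    return sum(math.log(d(perm[i + 1], perm[i])) for i in range(M - 1)) / (M - 1)

# The identity permutation does not spread neighbors: exponent 0.
print(discrete_lyapunov(list(range(64))))    # 0.0

# A discretization of the expanding map x -> 8x (mod 1) on 63 points
# (gcd(8, 63) = 1, so this is a permutation) recovers ln 8, the
# Lyapunov exponent of the continuous map.
perm = [(8 * i) % 63 for i in range(63)]
print(abs(discrete_lyapunov(perm) - math.log(8)) < 1e-9)   # True
```

The second example mirrors the theorem quoted in the abstract: as the lattice is refined, the discrete exponent of the discretized chaotic map approaches the continuous Lyapunov exponent.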
 
19.  Amigó J.M.^{♦}, Kocarev L.^{♦}, Szczepański J., Order patterns and chaos, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2006.01.093, Vol.355, pp.27-31, 2006
Abstract: Chaotic maps can mimic random behavior in a quite impressive way. In particular, those possessing a generating partition can produce any symbolic sequence by properly choosing the initial state. We study in this Letter the ability of chaotic maps to generate order patterns and come to the conclusion that their performance in this respect falls short of expectations. This result reveals a basic limitation of deterministic dynamics as compared to random ones. This being the case, we propose a non-statistical test based on ‘forbidden’ order patterns to discriminate chaotic from truly random time series with, in principle, arbitrarily high probability. Some relations with discrete chaos and chaotic cryptography are also discussed.
Keywords: Chaotic maps, Order patterns, Permutation entropy, Discrete Lyapunov exponent, Chaotic cryptography
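The forbidden-order-pattern test can be reproduced in a few lines. For the logistic map x -> 4x(1-x), a decreasing triple never occurs: two consecutive decreases would require an iterate above 3/4 twice in a row, while the map sends (3/4, 1) into (0, 3/4). The initial condition and sample sizes below are arbitrary:

```python
import random

def order_pattern(window):
    """Ordinal pattern of a window: indices sorted by value."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def visited_patterns(series, m=3):
    """Set of length-m ordinal patterns occurring in the series."""
    return {order_pattern(series[i:i + m]) for i in range(len(series) - m + 1)}

# Orbit of the logistic map x -> 4x(1-x).
x, orbit = 0.123, []
for _ in range(3000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)

# The decreasing pattern (2, 1, 0) is forbidden for this map...
print((2, 1, 0) in visited_patterns(orbit))   # False

# ...while a truly random sequence visits all 3! = 6 patterns.
random.seed(0)
noise = [random.random() for _ in range(3000)]
print(len(visited_patterns(noise)))           # 6
```

This is exactly the discriminating principle of the abstract: a genuinely random series realizes every order pattern eventually, whereas a deterministic chaotic map provably avoids some of them at every length.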
 
20.  Szczepański J., Amigó J.M.^{♦}, Michałek T., Kocarev L.^{♦}, Cryptographically secure substitutions based on the approximation of mixing maps, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, ISSN: 1549-8328, DOI: 10.1109/TCSI.2004.841602, Vol.52, No.2, pp.443-453, 2005
Abstract: In this paper, we explore, following Shannon’s suggestion that diffusion should be one of the ingredients of resistant block ciphers, the feasibility of designing cryptographically secure substitutions (think of S-boxes, say) via approximation of mixing maps by periodic transformations. The expectation behind this approach is, of course, that the nice diffusion properties of such maps will be inherited by their approximations, at least if the convergence rate is appropriate and the associated partitions are sufficiently fine. Our results show that this is indeed the case and that, in principle, block ciphers with close-to-optimal immunity to linear and differential cryptanalysis (as measured by the linear and differential approximation probabilities) can be designed along these guidelines. We also provide practical examples and numerical evidence for this approximation philosophy.
Keywords: Block cipher, differential cryptanalysis, linear cryptanalysis, mixing dynamical system, periodic approximation, S-box
 
21.  Amigó J.M.^{♦}, Szczepański J., Kocarev L.^{♦}, A chaos-based approach to the design of cryptographically secure substitutions, PHYSICS LETTERS A, ISSN: 0375-9601, DOI: 10.1016/j.physleta.2005.05.057, Vol.343, pp.55-60, 2005
Abstract: We show that chaotic maps may be used for designing so-called substitution boxes for ciphers resistant to linear and differential cryptanalysis, providing an alternative to the algebraic methods. Our approach is based on the approximation of mixing maps by periodic transformations. The expectation behind this is, of course, that the nice chaotic properties of such maps will be inherited by their approximations, at least if the convergence rate is appropriate and the associated partitions are sufficiently fine. We show that this is indeed the case and that, in principle, substitutions with close-to-optimal immunity to linear and differential cryptanalysis can be designed along these guidelines. We also provide practical examples and numerical evidence for this approximation philosophy.
Keywords: Chaotic maps, Periodic approximations, Bit permutations, Cryptanalysis
 
22.  Amigó J.M.^{♦}, Szczepański J., A Conceptual Guide to Chaos Theory, Prace IPPT - IFTR Reports, ISSN: 2299-3657, No.9, pp.1-43, 1999
List of chapters in recent monographs
1.  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Zarzycki H.^{♦}, Szczepański J., Theory and Applications of Ordered Fuzzy Numbers. Studies in Fuzziness and Soft Computing, chapter: Fuzzy Numbers Applied to a Heat Furnace Control, Springer, Vol.356, pp.269-288, 2017
Conference papers
1.  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Szczepański J., Zarzycki H.^{♦}, Two New Defuzzification Methods Useful for Different Fuzzy Arithmetics, BOS-2016/IWIFSGN-2016, 14th National Conference of Operational and Systems Research (BOS) / 15th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets (IWIFSGN), 2016-10-12/14, Warszawa (PL), DOI: 10.1007/978-3-319-65545-1_9, Vol.559, pp.83-101, 2018
Abstract: One of the many reasons why humans search for new solutions is the inspiration for innovation found in natural processes and phenomena. The authors of the paper present new algorithms for the defuzzification block, the final stage of a fuzzy controller (fuzzy control system), whose defuzzified value controls a given object. The new methods presented are the Golden Ratio (GR) and the Mandala Factor (MF). The first of them uses the ancient Golden Ratio rule, known, among others, from the Fibonacci sequence. The second proposal is based on an interpretation of a drawing technique used in Asia, consisting of arranging pictures from grains of colored sand. In Tibetan Buddhism this technique is known as Mandala, a symbol of perfection and harmony. The interpretation of the perfection symbol and the Golden Ratio method in this paper is compared against other methods used in the defuzzification process, including the weighted average method, the centroid, and the mean of maxima. The setting for the solutions presented here is the ordered fuzzy numbers (OFN) theory, which allows one to use both the trend of a given phenomenon and more precise mathematical methods. A special property of the proposed methods is their sensitivity to the OFN number order. This means that the MF and GR operators applied to numbers of the same shape but of opposite orders yield different defuzzification values. The discussion of the research includes a comparison of the existing defuzzification operators with regard to their sensitivity to order.
Keywords: Fuzzy logic, Ordered fuzzy numbers, OFN, Defuzzification
 
2.  Czerniak J.M.^{♦}, Zarzycki H.^{♦}, Dobrosielski W.T.^{♦}, Szczepański J., Application of OFN Notation in the Fuzzy Observation of WIG20 Index Trend for the Period 2008-2016, BOS-2016/IWIFSGN-2016, 14th National Conference of Operational and Systems Research (BOS) / 15th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets (IWIFSGN), 2016-10-12/14, Warszawa (PL), DOI: 10.1007/978-3-319-65545-1_18, Vol.559, pp.190-201, 2018
Abstract: The article concerns the issue of seeking patterns in trends. In the study, a method to detect patterns in trends recorded in linguistic form is proposed. Linguistic variables take their values as a result of calculations in Ordered Fuzzy Numbers notation. Thus, in the first stage, fuzzification of the source data occurs. Transposition of the parameters was applied to daily quotations (min, max, the opening value, the closing value and the direction of change), which were interpreted as a single OFN number. This is the first use of this notation to describe a stock index, and it makes it possible to describe five different parameters in a single number. Then the data are converted into linguistic form. The level of trend-sequence similarity is determined by the following parameters: the frame size of the pattern, the percentage similarity of a trend sequence to a frame set at the outset, a threshold indicating how many trend fragments are consistent with the frame, and the frequency of the pattern occurrences. Patterns detected in this way are naturally characterized by varying support and by similarity coefficients both for the whole pattern and for its individual elements. For the purposes of this study, we developed a dedicated computer program performing pattern search. As research material, the main index of the Warsaw Stock Exchange, i.e. WIG20 from the years 2008-2016, was used as the data set. This preliminary study is a first step toward developing rule-based forecasting methods, and further experiments will be carried out in this direction.
Keywords: WIG20, OFN, Trend, Rule-based forecasting, Pattern recognition, Financial engineering
 
3.  Dobrosielski W.T.^{♦}, Czerniak J.M.^{♦}, Szczepański J., Zarzycki H.^{♦}, Triangular Expanding, A New Defuzzification Method on Ordered Fuzzy Numbers, Advances in Intelligent Systems and Computing, ISSN: 2194-5357, DOI: 10.1007/978-3-319-66830-7_54, Vol.641, pp.605-619, 2018 Abstract: At the beginning of the article the authors describe a new trend in artificial intelligence associated with fuzzy sets and the accompanying derivative solutions, which include the LR numbers of Dubois and Prade. The theory redefined on their basis has started a new trend in the form of ordered fuzzy numbers (OFN). The main features of ordered fuzzy numbers are described further in the article. Because the article is concerned with the proposed defuzzification method, the authors discuss a fuzzy controller model and, in particular, the defuzzification process. Criteria used for conventional fuzzy numbers and for ordered fuzzy numbers are also presented. In the further part of the article the defuzzification method called Triangular Expanding is presented. The authors compare it to the Geometrical Mean method introduced earlier, which inspired this solution. Results of comparisons with other methods, such as FOM, LOM and COG, are presented in the paper as well. A summary including conclusions and directions of further research is provided at the end. Keywords: Fuzzy logic, Ordered fuzzy numbers Affiliations:
 
4.  Dobrosielski W.T.^{♦}, Szczepański J., Zarzycki H.^{♦}, A Proposal for a Method of Defuzzification Based on the Golden Ratio (GR), IWIFSGN2015, 14th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets, 2015-10-26/28, Kraków (PL), DOI: 10.1007/978-3-319-26211-6_7, Vol.401, pp.75-84, 2016 Abstract: This article presents a proposal for a new method of defuzzification for a fuzzy controller, based on the concept of the golden ratio derived from the Fibonacci series [1]. The origin of the method was the observation of numerous instances of the golden ratio in such diverse fields as biology, architecture, medicine, and painting. A particular area of its occurrence is genetics, where we find the golden ratio in the very structure of the DNA molecule [2] (deoxyribonucleic acid molecules are 21 angstroms wide and 34 angstroms long for each full cycle of the double helix). This fact makes the ratio of the Fibonacci series in some sense a universal design principle used by man and nature alike. In keeping with the requirements, the authors first explain the essential concepts of fuzzy logic, including in particular the notions of a fuzzy controller and a method of defuzzification. Then, they postulate the use of the golden ratio in the process of defuzzification and call the idea the Golden Ratio (GR) Method. In the subsequent part of the article, the proposed GR-based instrument is compared with the classical methods of defuzzification, including COG, FOM, and LOM. In the final part, the authors carry out numerous calculations and formulate conclusions which serve to classify the proposed method. At the end they present directions of further research. Keywords: Fuzzy logic, Fuzzy sets, Fuzzy control system, Defuzzification, Fibonacci series Affiliations:

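The OFN encoding of daily quotations described in entry 2 above (min, max, opening value, closing value and direction of change packed into a single ordered fuzzy number) can be sketched as follows. This is a minimal illustrative sketch only: the class name, field ordering and the exact mapping of candle parameters onto the trapezoid are assumptions, not the convention used in the paper; the key idea taken from the abstract is that the fifth parameter, the direction of change, is carried by the orientation of the OFN.

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalOFN:
    # A trapezoidal OFN is determined by two linear branches:
    # the "up" branch f with f(0) = a, f(1) = b, and the
    # "down" branch g with g(1) = c, g(0) = d.
    a: float
    b: float
    c: float
    d: float

    @property
    def positive_order(self) -> bool:
        # Orientation of the OFN; here it encodes the direction of change.
        return self.a <= self.d

def candle_to_ofn(low: float, high: float, open_: float, close: float) -> TrapezoidalOFN:
    """Pack the five candle parameters (min, max, open, close, direction)
    into one OFN. The mapping below is a hypothetical illustration:
    a rising day gets positive orientation, a falling day negative."""
    if close >= open_:
        return TrapezoidalOFN(low, open_, close, high)
    return TrapezoidalOFN(high, open_, close, low)

rising = candle_to_ofn(low=98.0, high=105.0, open_=100.0, close=103.0)
falling = candle_to_ofn(low=98.0, high=105.0, open_=103.0, close=100.0)
```

Both OFNs store the same four price levels; only the orientation differs, which is what lets one number carry all five parameters.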
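The Golden Ratio defuzzification idea of entry 4 above can be illustrated with a short sketch. The abstract does not reproduce the authors' exact formula, so the sketch below rests on a loudly stated assumption: that the crisp output is simply the point dividing the support of the fuzzy set in the golden ratio. The function name and interface are hypothetical.

```python
# Golden ratio, approximately 1.618, as the limit of ratios of
# consecutive Fibonacci numbers mentioned in the abstract.
PHI = (1 + 5 ** 0.5) / 2

def golden_ratio_defuzzify(a: float, b: float) -> float:
    """Assumed GR-style defuzzifier: return the point splitting the
    support [a, b] so that the longer part relates to the whole as
    1/PHI. This is an illustration, not the authors' definition."""
    return a + (b - a) / PHI

x = golden_ratio_defuzzify(0.0, 1.0)  # about 0.618
```

Classical methods such as COG (center of gravity), FOM and LOM (first/last of maxima), against which the paper compares the GR method, likewise map a membership function to a single crisp value.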
Conference abstracts
1.  Proniewska K.^{♦}, Pręgowska A., van Dam P.^{♦}, Szczepański J., Automated ECG and acoustic signal based diagnosis of sleep disorders, NFIC 2018, 19th Interventional Cardiology Workshop New Frontiers in Interventional Cardiology, 2018-12-06, Kraków (PL), pp.10-11, 2018  
2.  Paprocki B.^{♦}, Pręgowska A., Szczepański J., Information Processing in Brain-Inspired Networks: Size and Density Effects, SolMech 2016, 40th Solid Mechanics Conference, 2016-08-29/09-02, Warszawa (PL), No.P192, pp.12, 2016  
3.  Szczepański J., Sanchez-Vives M.V.^{♦}, Arnold M.M.^{♦}, Montejo N.^{♦}, Paprocki B.^{♦}, Pręgowska A., Amigó J.M.^{♦}, Wajnryb E., Analyzing Neuroscience Signals using Information Theory and Complexity: Shannon Communication Approach, 12th INCF, 12th INCF Workshop on Node Communication and Collaborative Neuroinformatics, 2015-04-16/17, Warszawa (PL), pp.132, 2015  
4.  Szczepański J., Paprocki B.^{♦}, Transmission efficiency in the brain-like neuronal networks. Information and energetic aspects, 10th International Neural Coding Workshop, 2012-09-02/08, Prague (CZ), pp.127-128, 2012 Keywords: Neuronal Communication, Brain-like Network, Shannon Theory Affiliations:
 
5.  Paprocki B.^{♦}, Szczepański J., Effectiveness of information transmission in the brain-like communication models, 10th International Neural Coding Workshop, 2012-09-02/08, Prague (CZ), pp.93-94, 2012 Keywords: Brain-like network, Information transmission, Neuronal computation Affiliations:
