A novel approach for music genre identification using ZFNet, ELM, and modified electric eel foraging optimizer

After performing data preprocessing and acquiring spectrograms, the data are passed to the ZFNet/Extreme Learning Machine (ELM) approach for feature extraction and recognition.
To extract music genre features, a novel method is investigated in this study. The pre-trained ZFNet was first fine-tuned on the music genre datasets. Then, the final layers of ZFNet were replaced with an Extreme Learning Machine. Lastly, the generalization ability of the ELM was enhanced via the MEEFO.
ZFNet
In 2013, ZFNet won the ILSVRC competition, where the investigators developed a method to visualize the features extracted in each layer of a convolutional neural network. They then used it to examine the features obtained by AlexNet. Two issues were detected in AlexNet's architecture, and changes were made to the design to resolve them25,26. The first issue is that the features extracted by AlexNet in the first and second layers frequently contain high- and low-frequency components, while mid-frequency features are scarce. To resolve this issue, the first-layer filter size was changed from 11 × 11 to 7 × 7. The second issue concerns aliasing artifacts in the features of the second layer caused by the large convolution stride; accordingly, the stride was changed from 4 to 2. With these changes, more varied and discriminative features are obtained by the ZFNet architecture, and network performance improves compared with AlexNet. Consequently, a CNN with the ZFNet structure is used in the current study.
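For illustration, the sketch below shows how the two architectural changes described above (a 7 × 7 first-layer kernel with a stride of 2) might be expressed in PyTorch. The filter counts (96 and 256) and the pooling layers follow the commonly cited ZFNet configuration and are assumptions rather than details given in this text.

```python
import torch
import torch.nn as nn

class ZFNetStem(nn.Module):
    """First convolutional stages of a ZFNet-style CNN.

    Compared with AlexNet, the first kernel is reduced from 11x11 to 7x7 and
    its stride from 4 to 2, as described above. The filter counts (96, 256)
    are taken from the commonly cited ZFNet configuration (an assumption).
    """
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 96, kernel_size=7, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

# A 224x224 spectrogram image passed through the stem.
stem = ZFNetStem()
print(stem(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 256, 12, 12])
```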
Extreme learning machine
Machine learning applications frequently make wide use of one-hidden-layer neural networks such as the ELM. First, the input layer's weights and biases are initialized randomly; second, the output layer's weights are computed from those random values. Classical NN learning methods, by contrast, have lower efficiency and a slow learning pace. Suppose the number of neurons in the input layer is denoted by \(n\), and \(o\) and \(h\) are, in turn, the numbers of neurons in the output and hidden layers. The ELM output is computed via the following formula:
$$Y_{j}=\sum_{i=1}^{h}Q_{i}\,f\left(w_{i}\cdot z_{j}+b_{i}\right),\quad j=1,\dots,\gamma$$
(2)
where \(w_{i}\) denotes the input connection weights, \(b_{i}\) the bias of the \(i^{th}\) hidden neuron, \(Q_{i}\) the output weight of the \(i^{th}\) hidden neuron, \(z_{j}\) the \(j^{th}\) input sample, \(Y_{j}\) the ELM's final output, and \(\gamma\) the number of training samples. The matrix form of Eq. (2) is given by Eq. (3):
$$HQ=Y^{T}$$
(3)
here, \(Y^{T}\) refers to the transpose of the matrix \(Y\); moreover, \(H\) and \(Q\) are given as follows:
$$H={\begin{bmatrix}f\left(w_{1}\cdot z_{1}+b_{1}\right)&f\left(w_{2}\cdot z_{1}+b_{2}\right)&\cdots&f\left(w_{h}\cdot z_{1}+b_{h}\right)\\ \vdots&\vdots&\ddots&\vdots\\ f\left(w_{1}\cdot z_{\gamma}+b_{1}\right)&f\left(w_{2}\cdot z_{\gamma}+b_{2}\right)&\cdots&f\left(w_{h}\cdot z_{\gamma}+b_{h}\right)\end{bmatrix}}_{\gamma\times h}$$
(4)
$$Q={\left[Q_{1},Q_{2},\cdots,Q_{h}\right]}^{T}$$
(5)
The foremost aim of ELM training is to minimize the training error. To properly implement a traditional ELM, the input weights and biases must be selected randomly, and the activation function must be infinitely differentiable27. In the current technique, the ELM is trained by obtaining the output weights (\(Q\)) that minimize the least-squares objective in Eq. (6); the solution is given by Eq. (7).
$$\min_{Q}\parallel HQ-Y^{T}\parallel$$
(6)
$$\widehat{Q}=H^{+}Y^{T}$$
(7)
here, \(H^{+}\) denotes the Moore–Penrose generalized inverse of the matrix \(H\).
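A minimal sketch of the ELM training described by Eqs. (2)–(7), assuming a tanh activation (the text only requires a differentiable activation) and NumPy's pseudoinverse for \(H^{+}\):

```python
import numpy as np

def train_elm(Z, Y, n_hidden, rng=None):
    """Train a one-hidden-layer ELM (Eqs. (2)-(7)).

    Z : (n_samples, n_inputs) training inputs.
    Y : (n_samples, n_outputs) training targets.
    Random input weights w and biases b are drawn once and kept fixed;
    only the output weights Q are solved in closed form via the
    Moore-Penrose pseudoinverse (Eq. (7)).
    """
    rng = np.random.default_rng(rng)
    n_inputs = Z.shape[1]
    w = rng.standard_normal((n_inputs, n_hidden))   # input weights (fixed)
    b = rng.standard_normal(n_hidden)               # hidden biases (fixed)
    H = np.tanh(Z @ w + b)                          # hidden-layer output matrix, Eq. (4)
    Q = np.linalg.pinv(H) @ Y                       # least-squares output weights
    return w, b, Q

def predict_elm(Z, w, b, Q):
    return np.tanh(Z @ w + b) @ Q

# Toy usage with random data standing in for extracted ZFNet features.
X = np.random.rand(200, 32)
y = np.eye(10)[np.random.randint(0, 10, 200)]       # one-hot genre labels
w, b, Q = train_elm(X, y, n_hidden=64, rng=0)
print(predict_elm(X, w, b, Q).shape)
```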
Modified electric eel foraging optimization (MEEFO)
This part describes the inspiration and main idea, the mathematical model, the procedure, and the complexity of EEFO.
Inspiration
Electric eels are known as extraordinary hunters among animals. They belong to the family Gymnotidae, native to South America, and are famous among river species for their outstanding ability to produce electric discharges. Mature individuals can generate voltages between 300 and 800 volts to stun prey before consuming it. Consequently, electric eels have been called "high-voltage cables" among aquatic animals.
To produce electricity, each eel has three sets of electric organs comprising about 1,000 electricity-producing cells called electrocytes. These electrocytes act like miniature batteries that store charge.
Because of their limited eyesight, eels frequently emit discharges of about 10 volts to navigate and detect prey. They exploit the responses to these electrical signals to pursue and precisely locate fast-moving prey. Furthermore, high-voltage discharges are used as a weapon to defend against opponents, while low-voltage discharges serve for communication among individuals. Likewise, eels can perceive and interpret discharge information from other eels.
Eels thus use electric discharge as a highly advanced means of communication and defense. Whenever eels discover a target, they quickly release a stronger electric discharge and shock the quarry. This capability is a remarkably effective food-seeking strategy in biology. Recent research suggests that electric eels also exhibit swarm behavior: similar to mammals, eels apply communal predation as a hunting technique.
Groups of eels can coordinate activities, including interacting, resting, migrating, and hunting, to pursue prey and carry out communal predation. When hunting as a group, eels tend to gather, swim in loops, and corral the school of fish into a "prey ball" before jointly launching a destructive high-voltage assault on it. This group hunting manner increases the chance of catching a greater number of prey, particularly when fish are plentiful. EEFO is designed based on these behaviors.
Mathematical model and algorithm
The EEFO model includes exploitation and exploration stages, which are derived from the social predation actions of electric eels, namely interacting, resting, migrating, and hunting. The following subsections present the mathematical models of these behaviors.
Interacting
Once eels meet a school of fish, they interact with each other while swimming and circling. The eels then begin swimming in a large electrical ring to trap smaller fish in the middle of the circle. Within EEFO, each electric eel is a candidate solution, and the best candidate solution obtained so far is regarded as the prey. Interaction means that each individual cooperates with other members using information about the individuals' positions; this behavior can be viewed as the exploration stage. Specifically, an electric eel can interact with an eel selected randomly from the population by using the position information of all members. Updating an individual's position then involves the difference between a randomly selected individual and the population centre.
Furthermore, an electric eel can cooperate with a randomly selected candidate by using local information in the solution space: the candidate's position is updated using the difference between a randomly selected individual from the population and a position generated randomly in the solution space. The communication among candidates is characterized by a factor that represents stochastic motion along several randomly chosen directions. The following equations define this factor.
$$\:A={n}_{1}\times\:C$$
(8)
$$\:{n}_{1}\sim\:N\left(\text{0,1}\right)$$
(9)
$$C=\left[c_{1},c_{2},\dots,c_{k},\dots,c_{d}\right]$$
(10)
$$c\left(k\right)=\begin{cases}1&if\ k==g\left(l\right)\\0&else\end{cases}$$
(11)
$$\:g=randperm\left(d\right)$$
(12)
$$\:l=1,\dots\:\:,\:\lceil\frac{T-t}{T}\times\:{r}_{1}\times\:\left(d-2\right)+2\rceil$$
(13)
Here, \(T\) denotes the maximum number of iterations, \(t\) the current iteration, and \(d\) the problem dimension. The interacting behavior is determined as:
$$v_{i}\left(t+1\right)=\begin{cases}\begin{cases}z_{j}\left(t\right)+A\times\left(\bar{z}\left(t\right)-z_{i}\left(t\right)\right)&p_{1}>0.5\\z_{j}\left(t\right)+A\times\left(z_{r}\left(t\right)-z_{i}\left(t\right)\right)&p_{1}\le 0.5\end{cases}&fit\left(z_{j}\left(t\right)\right)<fit\left(z_{i}\left(t\right)\right)\\\begin{cases}z_{i}\left(t\right)+A\times\left(\bar{z}\left(t\right)-z_{j}\left(t\right)\right)&p_{2}>0.5\\z_{i}\left(t\right)+A\times\left(z_{r}\left(t\right)-z_{j}\left(t\right)\right)&p_{2}\le 0.5\end{cases}&fit\left(z_{j}\left(t\right)\right)\ge fit\left(z_{i}\left(t\right)\right)\end{cases}$$
(14)
$$\bar{z}\left(t\right)=\frac{1}{n}\sum_{i=1}^{n}z_{i}\left(t\right)$$
(15)
$$\:{z}_{r}=L+r\times\:(U-L)$$
(16)
In which \(p_{1}\) and \(p_{2}\) are stochastic values in the interval (0, 1), \(fit\left(z_{i}\left(t\right)\right)\) denotes the objective value of the \(i^{th}\) electric eel's position, \(z_{j}\) is the position of a candidate selected randomly from the current population with \(j\ne i\), the population size is denoted by \(n\), \(r_{1}\) is a stochastic value in (0, 1), \(r\) is a stochastic vector in (0, 1), and \(L\) and \(U\) are, in turn, the lower and upper bounds. Based on Eq. (14), the ability of electric eels to communicate permits them to navigate to several positions within the search space. This interaction plays a major role in exploring the full search space of EEFO.
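The sketch below illustrates one interacting update, following Eqs. (8)–(16); the second half of Eq. (14), where the roles of \(z_{i}\) and \(z_{j}\) are swapped, is assumed to mirror the first half.

```python
import numpy as np

def interacting_step(Z, fit, i, t, T, L, U, rng):
    """One interacting (exploration) update for eel i, after Eqs. (8)-(16).

    Z: (n, d) positions, fit: (n,) objective values, L, U: bound vectors.
    The binary vector C activates a random subset of dimensions whose
    size shrinks as the iterations progress (Eqs. (10)-(13)).
    """
    n, d = Z.shape
    l = int(np.ceil((T - t) / T * rng.random() * (d - 2) + 2))   # Eq. (13)
    g = rng.permutation(d)                                       # Eq. (12)
    C = np.zeros(d)
    C[g[:l]] = 1.0                                               # Eqs. (10)-(11)
    A = rng.standard_normal() * C                                # Eqs. (8)-(9)

    j = rng.choice([k for k in range(n) if k != i])              # partner, j != i
    z_mean = Z.mean(axis=0)                                      # Eq. (15)
    z_rand = L + rng.random(d) * (U - L)                         # Eq. (16)
    p1, p2 = rng.random(), rng.random()

    if fit[j] < fit[i]:           # partner is better: search around z_j
        base, other, p = Z[j], Z[i], p1
    else:                         # assumed mirror case: search around z_i
        base, other, p = Z[i], Z[j], p2
    target = z_mean if p > 0.5 else z_rand
    return base + A * (target - other)                           # Eq. (14)
```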
Resting
The resting areas need to be identified before electric eels perform the resting behavior in EEFO. To enhance search efficiency, resting areas are defined around positions where one dimension of a candidate's position vector is projected onto the main diagonal of the solution space. To identify a candidate's resting area, the candidate's position and the solution space are first normalized to the range [0, 1]. A randomly selected dimension of the candidate's position is then projected onto the main diagonal of the normalized solution space, and the projected position is regarded as the centre of the eel's resting area. The resting area can be calculated by the following formula:
$$\left\{Z:\left|Z-X\left(t\right)\right|\le a_{0}\times\left|X\left(t\right)-z_{prey}\left(t\right)\right|\right\}$$
(17)
$$\:{a}_{0}=2.(e-{e}^{(t/T)})$$
(18)
$$\:X\left(t\right)=L+x\left(t\right)\times\:(U-L)$$
(19)
$$x\left(t\right)=\frac{z_{rand\left(n\right)}^{rand\left(d\right)}\left(t\right)-L^{rand\left(d\right)}}{U^{rand\left(d\right)}-L^{rand\left(d\right)}}$$
(20)
where \(z_{prey}\) is the position vector of the best solution obtained so far, \(a_{0}\) is the initial scale of the resting area, \(a_{0}\times\left|X\left(t\right)-z_{prey}\left(t\right)\right|\) designates the span of the resting area, \(z_{rand\left(n\right)}^{rand\left(d\right)}\) is a randomly selected position dimension of a randomly selected member of the current population, and the normalized value is denoted by \(x\left(t\right)\). Consequently, the resting position of a candidate is generated within its resting area before the resting behavior is performed:
$$\:{R}_{i}(t\:+\:1)\:=\:X\left(t\right)+\alpha\:\times\:\left|X\left(t\right)-{z}_{prey}\left(t\right)\right|$$
(21)
$$\:\alpha\:\:=\:{a}_{0}\times\:\:sin\left(2\pi\:{r}_{2}\right)$$
(22)
where \(\alpha\) denotes the scale of the resting area and \(r_{2}\) is a stochastic value in (0, 1). The scale \(\alpha\) allows the range of the resting area to shrink as the iterations progress, which improves local search. Once the resting area has been established, candidates move towards it; that is, a candidate updates its position with respect to its resting position within its resting area. The resting behavior is defined as follows:
$$\:{v}_{i}(t\:+\:1)={\:R}_{i}(t\:+\:1)+{n}_{2}\times\:({R}_{i}\left(t\:+\:1\right)-round(rand)\times\:{z}_{i}(t\left)\right)$$
(23)
$$\:{n}_{2}\sim\:N\left(\text{0,1}\right)$$
(24)
Hunting
Once eels discover prey, they do not simply form a group and attack. Instead, they tend to swim cooperatively in a large circle and enclose the quarry. In the meantime, the individuals continually communicate and collaborate with their companions through low-voltage organ discharges. As the individuals' communication strengthens, the size of the electric circle decreases. Lastly, the individuals drive the school of fish from the lower part of the circle to the upper part, where the fish are easy to hunt. Based on this behavior, the electric circle is defined as the hunting space. At this point the prey starts moving about in the hunting area; being frightened, it rapidly jumps from its current location to other locations within the hunting area. The hunting space can be defined as follows:
$$\:\left\{Z\left|Z-{z}_{prey}\left(t\right)\right|\le\:{\beta\:}_{0}\times\:\left|\stackrel{-}{z}\left(t\right)-{z}_{prey}\left(t\right)\right|\right\}$$
(25)
$$\:{\beta\:}_{0}=\:2\:\times\:(e-{e}^{(t/T)})$$
(26)
here, \(\beta_{0}\) defines the initial size of the hunting space. Based on Eq. (25), a candidate focuses on the prey \(z_{prey}\left(t\right)\) by establishing a hunting range given by the term \(\beta_{0}\) multiplied by the absolute value of the difference between the positions \(\bar{z}\left(t\right)\) and \(z_{prey}\left(t\right)\). Consequently, a new location of the prey relative to its former location in the hunting space can be produced as:
$$\:{H}_{prey}\left(t+1\right)={z}_{prey}\left(t\right)+\beta\:\times\:\left|\stackrel{-}{z}\left(t\right)-{z}_{prey}\left(t\right)\right|$$
(27)
$$\:\beta\:={\beta\:}_{0}\times\:sin\left(2\pi\:{r}_{3}\right)$$
(28)
here, \(\beta\) defines the size of the hunting space and \(r_{3}\) is a stochastic value in the interval (0, 1). The scale \(\beta\) causes the range of the hunting space to shrink over time, which benefits exploitation. Having determined the hunting space, a candidate starts to catch the prey within it. When hunting, the individual rapidly detects the prey, coils to the new location to surround the prey between its head and tail, and emits a high-voltage current around the prey. The hunting behavior in EEFO therefore contains a coiling motion in which an eel's location is updated towards the prey's new location. The coiling behavior displayed by eels during hunting can be described as:
$$\:{v}_{i}(t\:+\:1)={\:H}_{prey}(t\:+\:1)+\eta\:\times\:({H}_{prey}\left(t\:+\:1\right)-round(rand)\times\:{z}_{i}(t\left)\right)$$
(29)
where, η designates the coiling factor, defined as follows:
$$\:\eta\:\:={\:e}^{({r}_{4}\left(1-\:t\right)/T)}\times\:cos\left(2\pi\:{r}_{4}\right)$$
(30)
where \(r_{4}\) is a stochastic value in the interval (0, 1). Once the prey is encircled by the eels, several new locations are produced because the startled prey makes sudden dives. A candidate then surprises the prey via the coiling behavior, and one of these locations is used to update the candidate's position in the next iteration.
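The hunting (coiling) update of Eqs. (25)–(30) can be sketched as follows, reusing the population mean \(\bar{z}\left(t\right)\) defined in Eq. (15); the coiling factor follows Eq. (30) exactly as written above.

```python
import numpy as np

def hunting_step(z_i, z_prey, Z, t, T, rng):
    """Hunting (exploitation) update for one eel, following Eqs. (25)-(30)."""
    z_mean = Z.mean(axis=0)
    beta0 = 2.0 * (np.e - np.exp(t / T))                     # Eq. (26)
    beta = beta0 * np.sin(2 * np.pi * rng.random())          # Eq. (28)
    H_prey = z_prey + beta * np.abs(z_mean - z_prey)         # Eq. (27), prey's new spot
    r4 = rng.random()
    eta = np.exp(r4 * (1 - t) / T) * np.cos(2 * np.pi * r4)  # Eq. (30), coiling factor
    return H_prey + eta * (H_prey - np.round(rng.random()) * z_i)   # Eq. (29)
```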
Migrating
Once candidates discover prey, they tend to travel from the resting space to the hunting space. The mathematical model of the eels' migration behavior is given by the following formulas:
$$v_{i}\left(t+1\right)=-r_{5}\times R_{i}\left(t+1\right)+r_{6}\times\left(H_{r}\left(t+1\right)-Lf\times\left(H_{r}\left(t+1\right)-z_{i}\left(t\right)\right)\right)$$
(31)
$$\:{H}_{r}\left(t\:+\:1\right)={z}_{prey}\left(t\right)+\beta\:\times\:\left|\stackrel{-}{z}\left(t\right)-{z}_{prey}\left(t\right)\right|$$
(32)
$$\:Lf=0.01\times\:\left|\frac{u.\delta\:}{{\left|v\right|}^{(1/b)}}\right|$$
(33)
$$\:u,v\sim\:N\left(\text{0,1}\right)$$
(34)
$$\delta={\left(\frac{\Gamma\left(1+b\right)\times\sin\left(\frac{\pi b}{2}\right)}{\Gamma\left(\frac{1+b}{2}\right)\times b\times 2^{\left(\frac{b-1}{2}\right)}}\right)}^{\left(1/b\right)}$$
(35)
here, \(H_{r}\) is a random location in the hunting space, and \(r_{5}\) and \(r_{6}\) are stochastic values in the interval (0, 1). The term \(\left(H_{r}\left(t+1\right)-z_{i}\left(t\right)\right)\) indicates that individuals move towards the hunting space. The Lévy flight function, denoted by \(Lf\) and computed via Eqs. (33)–(35), is introduced into the global search stage of EEFO to avoid being trapped in local optima.
here, Γ denotes the standard Gamma function, and \(b\) equals 1.5. An eel can sense the prey's location through low-voltage discharges; therefore, it can change its location at any time. In the foraging procedure, if the eel senses that prey is close, it moves to the candidate location; otherwise, it remains at its present location. The eels' locations are updated as follows:
$$\:{z}_{i}\left(t+1\right)=\left\{\begin{array}{c}{z}_{i}\left(t\right)\:\:\:\:\:\:\:\:\:\:\:\:\:\:fit\left({z}_{i}\left(t\right)\right)\le\:fit\left({v}_{i}\left(t+1\right)\right)\\\:{v}_{i}\left(t+1\right)\:\:\:\:\:\:fit\left({z}_{i}\left(t\right)\right)>fit\left({v}_{i}\left(t+1\right)\right)\end{array}\:\:\:\:\:\right.$$
(36)
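A sketch of the migration step, the Lévy-flight term of Eqs. (33)–(35) (using the standard Mantegna sampling scheme, which is an assumption consistent with those equations), and the greedy update of Eq. (36):

```python
import numpy as np
from math import gamma

def levy_flight(d, rng, b=1.5):
    """Levy-flight term Lf of Eqs. (33)-(35), with b = 1.5."""
    delta = (gamma(1 + b) * np.sin(np.pi * b / 2) /
             (gamma((1 + b) / 2) * b * 2 ** ((b - 1) / 2))) ** (1 / b)    # Eq. (35)
    u = rng.standard_normal(d) * delta                                    # Eq. (34), scaled by delta
    v = rng.standard_normal(d)
    return 0.01 * np.abs(u / np.abs(v) ** (1 / b))                        # Eq. (33)

def migrating_step(z_i, z_prey, Z, R_i, t, T, rng):
    """Migration from the resting area towards the hunting area, Eqs. (31)-(32)."""
    z_mean = Z.mean(axis=0)
    beta = 2.0 * (np.e - np.exp(t / T)) * np.sin(2 * np.pi * rng.random())
    H_r = z_prey + beta * np.abs(z_mean - z_prey)                         # Eq. (32)
    Lf = levy_flight(z_i.size, rng)
    r5, r6 = rng.random(), rng.random()
    return -r5 * R_i + r6 * (H_r - Lf * (H_r - z_i))                      # Eq. (31)

def greedy_update(z_old, f_old, v_new, objective):
    """Keep the candidate position only if it improves the objective (Eq. (36))."""
    f_new = objective(v_new)
    return (v_new, f_new) if f_new < f_old else (z_old, f_old)
```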
The transition from exploration to exploitation
EEFO uses an energy factor to determine the search behavior, which efficiently manages the shift from exploration to exploitation and thereby enhances the algorithm's optimization performance. The value of a candidate's energy factor is used to choose between exploration and exploitation. The energy factor is calculated by the following formula:
$$\:Es\left(t\right)=4\times\:sin\left(1-\:\frac{t}{T}\right)\times\:ln\frac{1}{{r}_{7}}$$
(37)
Here, \(r_{7}\) is a stochastic value in the interval (0, 1). Based on Eq. (37), \(Es\left(t\right)\) diminishes as the iterations increase; furthermore, the energy factor \(Es\left(t\right)\) shows a decreasing trend with fluctuations throughout the iterations. Whenever the energy factor \(Es\left(t\right)>1\), individuals perform exploration over the whole parameter space in a cooperative manner, resulting in global search. Whenever the energy factor \(Es\left(t\right)\le 1\), eels tend to perform exploitation in a promising sub-region through the resting, migrating, or hunting behaviors, leading to local search. In the first half of the iterations, global search occurs with a higher probability, whilst in the second half, local search occurs with a higher probability. To investigate the search behavior of EEFO, the probability that \(Es\) exceeds 1 is assessed throughout the optimization procedure. Let
$$\:\theta\:=1-\frac{t}{T}\:$$
(38)
Then
$$Es\left(t\right)=4\sin\left(\theta\right)\ln\frac{1}{r_{7}}$$
(39)
The probability that \(Es>1\) is obtained by:
$$P\left\{Es\left(t\right)>1\right\}=\frac{\int_{0}^{1}\int_{0}^{e^{-\frac{1}{4\sin\theta}}}dr\,d\theta}{1}=\int_{\frac{1}{4\sin\left(1\right)}}^{\infty}\frac{e^{-z}}{z\sqrt{16z^{2}-1}}\,dz\approx 0.5035$$
(40)
Based on the result of Eq. (40), there is a probability of about 50% of performing either exploration or exploitation throughout the optimization procedure, which contributes significantly to balancing exploration and exploitation.
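The energy factor of Eq. (37) and a quick Monte Carlo check of the roughly 50% probability derived in Eq. (40) can be written as follows, with the iteration index drawn uniformly, as in the derivation above where \(\theta=1-t/T\) is treated as uniform on (0, 1).

```python
import numpy as np

def energy_factor(t, T, rng):
    """Energy factor Es(t) of Eq. (37): >1 triggers exploration, <=1 exploitation."""
    r7 = 1.0 - rng.random()          # value in (0, 1], avoids log of zero
    return 4.0 * np.sin(1.0 - t / T) * np.log(1.0 / r7)

# Monte Carlo check of the probability in Eq. (40).
rng = np.random.default_rng(0)
T = 200
samples = [energy_factor(rng.integers(1, T + 1), T, rng) > 1 for _ in range(100_000)]
print(np.mean(samples))   # roughly 0.5, in line with the ~0.5035 reported above
```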
Modified EEFO
The efficacy of the EEFO algorithm on complicated, constrained optimization problems is enhanced by modifications that address its susceptibility to parameter settings and its limited search-agent diversity. Chaotic maps, which are used in metaheuristic optimization, provide advantages such as randomness, favourable statistical characteristics, and nonlinearity. These benefits enable deeper exploration and yield improved solutions. Consequently, chaotic maps have become increasingly popular in computer engineering, particularly for generating random numbers and substituting pseudo-random numbers with Gaussian distributions.
The chaos-enhanced optimization technique marked the beginning of a long line of advancements in optimization approaches. On the domain (0, 1), the Liebovitch map stands out due to the existence of two breakpoints; it can be mathematically described as follows:
$$\theta^{i}\left(t+1\right)=\begin{cases}\sigma\times\theta^{i}\left(t\right),&0<\theta^{i}\left(t\right)\le P_{1}\\\frac{P_{2}-\theta^{i}\left(t\right)}{P_{2}-P_{1}},&P_{1}<\theta^{i}\left(t\right)\le P_{2}\\1-\gamma\times\left(1-\theta^{i}\left(t\right)\right),&P_{2}<\theta^{i}\left(t\right)<1\end{cases}$$
(41)
Here, all random values, including \(r_{1},r_{2},\dots,r_{6}\), are generated by the chaotic map, i.e.,
$$\theta_{j}^{i}=r_{j},\quad j=1,2,\dots,6$$
(42)
The "elimination phase" is an important step for improving the metaheuristic algorithm's efficiency and its convergence to the best solution. In this procedure, all candidate solutions are ranked so that the algorithm can zero in on the most promising areas of the solution space. By removing the subpar candidates, the algorithm becomes more efficient and gets closer to the best solution. During the elimination step, all solutions are ranked by objective-function value, and those with the lowest (best) scores are retained.
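An illustrative sketch of the Liebovitch map of Eq. (41), its use as a chaotic replacement for \(r_{1},\dots,r_{6}\) (Eq. (42)), and a simple elimination phase follows; the breakpoints, slopes, keep ratio, and re-seeding strategy are assumptions, since the text does not specify them.

```python
import numpy as np

def liebovitch_map(theta, P1=0.31, P2=0.68, sigma=2.3, gamma_=1.9):
    """One iteration of the Liebovitch map of Eq. (41) on a value in (0, 1).

    The breakpoints P1, P2 and slopes sigma, gamma_ are illustrative
    assumptions; the text does not report the values actually used.
    """
    if theta <= P1:
        return sigma * theta
    if theta <= P2:
        return (P2 - theta) / (P2 - P1)
    return 1.0 - gamma_ * (1.0 - theta)

def chaotic_randoms(n, seed=0.412):
    """Chaotic numbers in (0, 1) that replace the uniform r1..r6 (Eq. (42))."""
    vals, theta = [], seed
    for _ in range(n):
        theta = liebovitch_map(theta)
        vals.append(theta)
    return np.array(vals)

def elimination_phase(Z, fit, L, U, keep_ratio=0.8, rng=None):
    """Rank candidates by objective value and re-seed the worst ones.

    The keep ratio and the uniform re-seeding of discarded agents inside
    [L, U] are assumptions about how the elimination step is realized.
    """
    rng = rng or np.random.default_rng()
    n, d = Z.shape
    order = np.argsort(fit)                                  # ascending: best first
    worst = order[max(1, int(keep_ratio * n)):]
    Z = Z.copy()
    Z[worst] = L + rng.random((worst.size, d)) * (U - L)     # replace subpar agents
    return Z
```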
Validation of the algorithm
Objective validation of the proposed modified EEFO algorithm was necessary to demonstrate its reliability. Initially, the approach was used to solve 23 long-standing, conventional benchmark functions. Its results were compared with five more advanced algorithms: LFD (Lévy Flight Distribution)28, PRO (Poor and Rich Optimization)29, WSO (War Strategy Optimization)30, EO (Equilibrium Optimizer)31, and the Runge–Kutta Optimizer (RUN)32. Table 1 tabulates a comparison of the competing algorithms' parameter values.
The table summarizes the parameter values of the competing optimization techniques: Lévy Flight Distribution (LFD), Poor and Rich Optimization (PRO), War Strategy Optimization (WSO), Equilibrium Optimizer (EO), and Runge–Kutta Optimizer (RUN). These parameters serve as the benchmark settings against which the MEEFO algorithm's performance is assessed. Each algorithm has its own collection of parameters, which must be judiciously set to obtain the best possible results, as the following comparison shows.
The study was conducted using unimodal functions (F1–F7), multimodal functions (F8–F13), and fixed-dimension functions (F14–F23)33. The unimodal and multimodal functions were evaluated in thirty dimensions. The primary goal of the research was to find the lowest possible value for all twenty-three functions; the algorithm that minimizes the output value most effectively is the most efficient.
Analyzing the algorithms' optimization solutions using the mean value and standard deviation (StD) led to a reliable analysis. To provide a strong and fair comparison, all algorithms were tested under identical conditions, including a fixed population size and an upper limit on the number of iterations: at most 200 iterations and a population of 60 search agents. Repeating this procedure 30 times guaranteed more accurate and trustworthy results34. The proposed MEEFO algorithm's performance relative to the other approaches is shown in Table 2.
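The evaluation protocol described above (population of 60, 200 iterations, 30 independent runs, mean and StD of the best values) can be reproduced with a harness such as the following; the `random_search` stand-in is only a placeholder for MEEFO or any competing optimizer.

```python
import numpy as np

def sphere(x):
    """Benchmark F1 (sphere function), to be minimized."""
    return float(np.sum(x ** 2))

def random_search(fn, dim, pop, iters, L, U, rng):
    """Trivial stand-in optimizer, used only to make the harness runnable;
    MEEFO or any competing algorithm would be plugged in here instead."""
    best = np.inf
    for _ in range(iters):
        X = L + rng.random((pop, dim)) * (U - L)
        best = min(best, min(fn(x) for x in X))
    return best

def evaluate(optimizer, fn, dim=30, runs=30, pop=60, iters=200, bounds=(-100.0, 100.0)):
    """Run `optimizer` repeatedly on one benchmark and report the mean and StD
    of the best objective values, mirroring the protocol described above."""
    L = np.full(dim, bounds[0])
    U = np.full(dim, bounds[1])
    best = [optimizer(fn, dim, pop, iters, L, U, np.random.default_rng(r))
            for r in range(runs)]
    return float(np.mean(best)), float(np.std(best))

print(evaluate(random_search, sphere))
```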
To examine the outcomes, the standard deviation (StD) and average of the optimization solutions offered by each method were computed, which makes the results of the study and comparison reliable. Examining the average values shows that the proposed MEEFO algorithm outperforms the competitors on the majority of the benchmark functions.
It finds highly optimal solutions for these functions because it consistently achieves very low mean values. Comparing MEEFO with the other algorithms shows that it usually performs better than, or at least as well as, the more sophisticated algorithms. Examples of functions where MEEFO outperforms the competing techniques are F1, F4, F8, F12, F15, and F16.
While MEEFO generally outperforms the competing algorithms, there are a few benchmark functions where it falls short. For example, other methods such as LFD, WSO, and EO provide lower average values on functions F2, F3, F6, F7, and F13. Looking at the standard deviation values, it is clear that MEEFO's optimization solutions are usually quite consistent, meaning the method is reliable and seldom gives wildly different outcomes when solving problems.
Evolved ZFNet
A method for music genre recognition, named ZFNet/ELM/MEEFO, is presented in this study. This technique results from combining the ZFNet, ELM, and MEEFO processes. Applying a trained ZFNet makes it possible to extract image features from the spectrograms of music genre audio signals.
This part evaluates the fitness of the search agents produced by MEEFO using a performance index based on the mean square error (MSE) for classification purposes. The fitness function is defined by the next equation:
$$fitness\ function=0.5\times{\left(\frac{\sum_{i=1}^{T}{\left(g_{i}-\widehat{g}_{i}\right)}^{2}}{T}\right)}^{0.5}$$
(43)
Here, \(\widehat{g}\) and \(g\) signify, in turn, the estimated and real outputs, and \(T\) is the total number of samples used for training.
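A direct transcription of Eq. (43) follows; note that, as written, the fitness equals half of the root-mean-square error between the real and estimated outputs.

```python
import numpy as np

def fitness(g, g_hat):
    """Fitness of Eq. (43): half of the root-mean-square error between the
    real outputs g and the estimated outputs g_hat over T training samples."""
    g, g_hat = np.asarray(g, dtype=float), np.asarray(g_hat, dtype=float)
    T = g.shape[0]
    return 0.5 * np.sqrt(np.sum((g - g_hat) ** 2) / T)

# Example: comparing real one-hot genre labels with ELM outputs.
print(fitness([1.0, 0.0, 0.0], [0.9, 0.1, 0.05]))
```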