Algorithms - Science topic
Explore the latest questions and answers in Algorithms, and find Algorithms experts.
Questions related to Algorithms
Hi,
Does anyone know a good way to mathematically define/identify the onset of a plateau for a curve y = f(x) in a 2D plane?
A bit more background: I have a set of curves from which I'd like to extract the x values where the "plateau" starts, by applying a consistent definition of plateau onset.
Thanks,
Yifan
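One workable convention, sketched below: define the onset as the first x from which the magnitude of the numerical derivative stays below a fixed fraction of its maximum. The 5% fraction is an arbitrary assumption to tune per data set; the point is that the same rule is then applied consistently to every curve.

import numpy as np

def plateau_onset(x, y, rel_tol=0.05):
    """Return the first x where |dy/dx| drops below rel_tol times its
    maximum and stays below it for the rest of the curve."""
    dydx = np.gradient(y, x)                    # numerical derivative
    thresh = rel_tol * np.max(np.abs(dydx))
    below = np.abs(dydx) < thresh
    for i in range(len(x)):                     # first index from which
        if below[i:].all():                     # 'below' holds to the end
            return x[i]
    return None

# example: a curve that rises and then flattens out
x = np.linspace(0, 10, 200)
y = 1 - np.exp(-x)
print(plateau_onset(x, y))                      # onset near x = 3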
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also have, and are developing, search engines in their range of online information services are working on technological solutions to implement ChatGPT-type artificial intelligence into these search engines. Currently, there are discussions and considerations about the social and ethical implications of such a potential combination of these technologies offered in open access on the Internet. The considerations concern the possible level of risk of manipulation of the information message in the new media, the potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of a planned shaping of public opinion, and so on.
This raises another issue for consideration: the legitimacy of creating a control institution that would carry out ongoing monitoring of the level of objectivity, independence, ethics, etc. of the algorithms used in technological solutions that implement ChatGPT-type artificial intelligence in Internet search engines, including the search engines that top the rankings of tools Internet users rely on for increasingly precise and efficient searches for specific information. If such a system of institutional control on the part of the state is not established, or if a control system involving the companies developing such solutions does not function effectively and/or does not keep up with the technological progress taking place, there may be serious negative consequences in the form of an increase in the scale of disinformation in the new Internet media.
How important this may be in the future is evident from what is currently happening around the social media platform TikTok. On the one hand, it has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities.
It cannot be ruled out that new types of social media will emerge in the future in which the above-mentioned solutions implementing ChatGPT-type artificial intelligence in online search engines will find application: search engines operated on the basis of intuitive feedback, automated profiling of the search engine to a specific user, or multi-option, multi-criteria search controlled by the Internet user for precisely specified information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies, institutions, etc. on social media sites, on web-based publication-indexing sites, and in web-based knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

How to create a system of digital, universal tagging of various kinds of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
How to create a system of digital, universal labelling of different types of works: texts, photos, publications, graphics, videos, innovations, patents, etc., produced by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification should be different because they are the product of artificial intelligence?
Two days earlier, in a previous post, I started a discussion on the necessity of improving the security of the development of artificial intelligence technology and asked the following questions: How should the system of institutional control of the development of advanced artificial intelligence models and algorithms be structured so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee? Should the development of artificial intelligence be subject to control, and if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built? Why are the creators of leading technology companies developing ICT, Internet technologies and Industry 4.0, including those developing artificial intelligence technologies, now calling for the development of this technology to be periodically and deliberately slowed down, so that it remains fully under control and does not get out of hand?
Continuing my reflections on the indispensability of improving the security of the development of artificial intelligence technology, and analysing the potential risks of its dynamic and uncontrolled development, I propose to continue deliberations on this issue and invite you to participate in a discussion aimed at identifying the key determinants of building an institutional control system for the development of artificial intelligence, including the development of advanced models composed of algorithms similar to, or more advanced than, the ChatGPT 4.0 system developed by OpenAI and available on the Internet.
It is necessary to normatively regulate a number of issues related to artificial intelligence: the development of advanced models composed of algorithms that form artificial intelligence systems; the posting of these technological solutions in open access on the Internet; enabling these systems to improve themselves through automated learning of new content, knowledge, information, abilities, etc.; and the building of an institutional system of control over the development of artificial intelligence technology and its current and future applications in the various fields of activity of people, companies, enterprises and institutions operating in different sectors of the economy.
Recently, realistic-looking photos of well-known, highly recognisable people, including politicians and heads of state, in unusual situations, created by artificial intelligence, have appeared on online social media sites. What has already appeared on the Internet as a kind of 'free creativity' of artificial intelligence, both in the form of 'fictitious facts' in descriptions of events that never happened, generated as answers to questions posed to the ChatGPT system, and in the form of photographs of 'fictitious events', already indicates the potentially enormous scale of the disinformation now developing on the Internet thanks to artificial intelligence systems whose products of 'free creativity' find their way online.
With the help of artificial intelligence, in addition to texts containing descriptions of 'fictitious facts' and photographs depicting 'fictitious events', it is also possible to create films depicting 'fictitious events' in cinematic terms. All of these creations of the 'free creation' of artificial intelligence can be posted on social media and, in the formula of viral marketing, can spread rapidly on the Internet, becoming a source of serious disinformation realised potentially on a large scale. Dangerous opportunities have therefore arisen for the use of this technology to generate disinformation about, for example, a competitor company, enterprise, institution, organisation or individual.
Within the framework of building an institutional control system for the development of artificial intelligence technology, it is necessary to create a digital, universal marking system for the various types of works: texts, photos, publications, graphics, films, innovations, patents, etc., produced by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification should be different for what is the product of artificial intelligence. The only issue for discussion is therefore how this should be done.
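On the technical side of "how this should be done", one building block could be cryptographically signed provenance metadata attached to each work, in the spirit of the C2PA content-credentials effort. A minimal, purely illustrative sketch; all field names and the key handling are hypothetical:

import hashlib, hmac, json

GENERATOR_KEY = b"secret key held by the AI provider"   # hypothetical key management

def make_ai_provenance_tag(content: bytes, generator_id: str) -> str:
    """Attach a verifiable 'made by AI system X' label to a work."""
    digest = hashlib.sha256(content).hexdigest()
    tag = {"content_sha256": digest, "generator": generator_id}
    payload = json.dumps(tag, sort_keys=True).encode()
    tag["signature"] = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps(tag)

print(make_ai_provenance_tag(b"text produced by a model", "example-llm-1"))

Anyone holding the verification key can recompute the signature and detect tampering; the open question raised above, who holds and audits such keys, is institutional rather than technical.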
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to create a system for the digital, universal marking of different types of works, texts, photos, publications, graphics, videos, innovations, patents, etc. made by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification should be different for what is the product of artificial intelligence?
How to create a system of digital, universal labelling of different types of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

Blockchain is a distributed database of immutable records, called blocks, which are secured using cryptography. Each block contains the previous block's hash, transaction details, a nonce and a target hash value. Financial institutions were the first to take notice of it, as it was, in simple terms, a new payment system.
A block is a place in a blockchain where data is stored. In the case of cryptocurrency blockchains, the data stored in a block are transactions. Blocks are chained together by adding the previous block's hash to the next block's header; this keeps the order of the blocks intact and makes the data in the blocks immutable.
A block is like a record of a transaction. Each time a block is verified, it is recorded in chronological order in the main blockchain. Once the data is recorded, it cannot be modified.
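A minimal sketch of the chaining just described, assuming SHA-256 and ignoring proof-of-work difficulty and networking:

import hashlib, json, time

def make_block(transactions, prev_hash, nonce=0):
    header = {"prev_hash": prev_hash, "tx": transactions,
              "nonce": nonce, "time": time.time()}
    block_hash = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": block_hash}

# chain two blocks: the second stores the first block's hash, so any
# alteration of block 1 changes its hash and breaks block 2's link
genesis = make_block(["A pays B 5"], prev_hash="0" * 64)
block2 = make_block(["B pays C 2"], prev_hash=genesis["hash"])
print(block2["header"]["prev_hash"] == genesis["hash"])   # True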

Hello everyone, I want to perform a division operation in Verilog HDL. Please suggest an algorithm for division in which the number of clock cycles taken by the operation is independent of the inputs; that is, for the division of any number a by any number b, the same number of clock cycles is taken for any pair of a and b.
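One standard answer is restoring (or non-restoring) binary division: it produces exactly one quotient bit per iteration, so an n-bit divider always takes n cycles, whatever the values of a and b. A Python model of the iteration (each loop body would become one clock cycle of a Verilog FSM):

def restoring_divide(a, b, n=32):
    """n-bit restoring division: always exactly n iterations,
    independent of the values of a and b."""
    assert b != 0
    remainder, quotient = 0, 0
    for i in range(n - 1, -1, -1):                      # one iteration == one clock cycle
        remainder = (remainder << 1) | ((a >> i) & 1)   # shift in next dividend bit
        if remainder >= b:                              # trial subtraction succeeds
            remainder -= b
            quotient |= (1 << i)                        # set this quotient bit
    return quotient, remainder

print(restoring_divide(100, 7))   # (14, 2)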
🔴Human-GOD coevolution & the Religion-type of the future (technology; genetics; medicine; robotics; informatics; AI Algorithm & Quantum PC;..)🔴
Gentle RG-readers,
given the (possible) technological and informatic evolution of Homo sapiens from 2000 to 2100,
which religion type shows the best resistance and resilience?
What will be the main religion type of the future?
--One GOD.
--Multiple and diverse GODs.
--Absence of Transcendence.
Moreover: will GOD be an evolutionary step for humans?
In this link (chapter, p. 40) there is a quasi-fantasy scenario ...
Book VanGELO Assoluto.
● Other papers of this series ●
Technical Report Vagiti di DIO. GOD Wailing.
Technical Report Tetanic Crystal. Hypotheses on the state of aggregation of m...
Technical Report UFO: dati statistici a supporto della teoria della evoluzion...
🟥

How should a system of institutional control over the development of advanced artificial intelligence models and algorithms be built so that this development does not get out of hand and lead to negative consequences that are currently difficult to predict?
Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built?
Why are the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc. now calling for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology is fully under control and does not get out of hand?
To the question "should the development of artificial intelligence be under control?" the answer is probably obvious: it should. What remains debatable is how the system of institutional control of the development of advanced artificial intelligence models and algorithms should be structured so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee. Moreover, if the question "should the development of artificial intelligence be controlled?" is answered in the affirmative, then who should exercise this control? How should an institutional system of control over the development of advanced artificial intelligence models and algorithms and their applications be constructed, so that the potential and real future negative effects of dynamic and not fully controlled technological progress do not outweigh the positive ones?
At the end of March 2023, a number of new technology developers and artificial intelligence experts, as well as businessmen and investors in technology start-ups, called in a joint letter for at least a six-month pause in the development of artificial intelligence systems more capable than the GPT-4 published in March. The signatories included Apple co-founder Steve Wozniak; Elon Musk, founder or co-founder of PayPal, SpaceX, Tesla, Neuralink and the Boring Company; Stability AI chief Emad Mostaque (maker of the Stable Diffusion image generator); and artificial intelligence researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and other universities and AI labs.
The letter, acting as a kind of cautionary petition, was published on the website of the Future of Life Institute. It argues that advanced artificial intelligence could represent "a profound change in the history of life on Earth" and that the development of this technology should be approached with caution. The petition warns about the unpredictable consequences of the race to create ever more powerful models and complex algorithms, the key components of artificial intelligence technology. Its signatories suggest that the development of artificial intelligence should be slowed down temporarily, as the risk has now emerged that this development could slip out of human control.
The petition warns that an uncontrolled approach to AI development risks a deluge of disinformation, mass automation of work and even the replacement of humans by machines and a "loss of control over civilisation". It suggests that if the current rapid development of artificial intelligence algorithm systems gets out of hand, the scale of disinformation on the Internet will increase significantly and the process of work automation already under way will accelerate many times over, which may lead to the loss of jobs for about 300 million people within the current decade and, in consequence, to a kind of loss of human control over the development of civilisation. The signatories argue that advanced artificial intelligence systems should only be developed once it is certain that the development is under full control, its effects are positive and the potential risks are fully controllable.
Developers of new technologies are calling for a temporary pause in the training of systems superior to OpenAI's recently released GPT-4 system, which, among other things, is capable of passing tests of various kinds at a level close to the best results passed by humans. The aforementioned letter also calls for the implementation of comprehensive government regulation and oversight of new models of advanced AI algorithms, so that the development of this technology does not overtake the creation of the necessary legal regulations.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why do the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc., now call for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology takes place fully under control and does not get out of hand?
Should the development of artificial intelligence be controlled? And if so, who should exercise this control? How should an institutional control system for the development of artificial intelligence applications be built?
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee?
What do you think?
What is your opinion on the subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

LMS algorithm for adaptive linear prediction (say, for example, 200).
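Presumably the "200" above refers to a run parameter such as the filter length or number of samples, which is left as stated. For reference, a minimal sketch of an LMS adaptive linear predictor; the filter order M and step size mu here are illustrative assumptions:

import numpy as np

def lms_predictor(x, M=8, mu=0.01):
    """Predict x[n] from the previous M samples with LMS weight updates."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]          # most recent sample first
        y = w @ u                     # prediction of x[n]
        e[n] = x[n] - y               # prediction error
        w += mu * e[n] * u            # LMS weight update
    return w, e

# example: predict a noisy sinusoid
t = np.arange(2000)
x = np.sin(0.05 * t) + 0.05 * np.random.randn(t.size)
w, e = lms_predictor(x)
print(np.mean(e[-500:] ** 2))         # small steady-state error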
Dear colleagues,
I was wondering whether there are any methods or software that I can use to analyse the muscle cross-sectional area in my H&E histology images?
I tried ImageJ thresholding; unfortunately, it does not work efficiently for me. Thus, I was wondering whether there are any currently established methods.
Thank you very much in advance.
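In case it helps, one possible open-source pipeline with scikit-image is sketched below. The Otsu threshold, the minimum object size and the pixel-to-µm² calibration are all assumptions to tune for your slides, and dedicated tools such as MyoVision or CellProfiler may work better out of the box:

import numpy as np
from skimage import io, color, filters, measure, morphology

img = io.imread("section.tif")                  # H&E image (example path)
gray = color.rgb2gray(img)
mask = gray < filters.threshold_otsu(gray)      # tissue darker than background
mask = morphology.remove_small_objects(mask, min_size=200)   # drop debris
labels = measure.label(mask)                    # one label per fiber/region
UM2_PER_PIXEL = 0.25                            # assumed calibration factor
areas = [r.area * UM2_PER_PIXEL for r in measure.regionprops(labels)]
print(len(areas), np.mean(areas))               # count and mean CSA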
I have seen the authors' implementation of L-BFGS-B in Fortran and ports in several languages. I am trying to implement the algorithm on my own.
I am having difficulty grasping a few steps. Is there a worked-out example using L-BFGS or L-BFGS-B, something similar to (attached link), explaining the output of each step in an iteration for a simple problem?
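A fully worked numeric trace is hard to fit here, but the step most people stumble on is the two-loop recursion, which applies the inverse-Hessian approximation to the gradient using only the last m curvature pairs. A sketch of plain L-BFGS (no bound handling, which is what L-BFGS-B adds on top):

import numpy as np

def two_loop(grad, s_list, y_list):
    """L-BFGS two-loop recursion: returns (approximately) H_k @ grad,
    given stored pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    if not s_list:                      # no history yet: steepest descent
        return grad.copy()
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)               # first loop: newest pair first
        q -= a * y
        alphas.append(a)                # stored newest-first
    s, y = s_list[-1], y_list[-1]
    r = ((s @ y) / (y @ y)) * q         # initial scaling H0 = gamma * I
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)               # second loop: oldest pair first
        r += (a - b) * s
    return r                            # the quasi-Newton direction is -r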
I am analysing data collected by a questionnaire survey, which consists of sociodemographic questions as well as Likert-scale questions about satisfaction with public transport. I am developing a predictive model to predict public perceptions of using public transport based on sociodemographic characteristics and satisfaction levels.
I could not find any related reference to cite. Therefore, I want to make sure that my study is headed in the right direction.
Kindly suggest which routing algorithm is best to implement for finding the optimal route in wireless ad hoc networks.
Performance criteria: end-to-end delay, packet delivery ratio, throughput.
I want to understand the C5.0 algorithm for data classification. Does anyone have the steps for it, or the original paper in which this algorithm was presented?
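As far as I know, C5.0 was never published as a standalone paper; it is Quinlan's commercial successor to C4.5, whose canonical reference is his book "C4.5: Programs for Machine Learning" (1993). The core split criterion in both is the information gain ratio; an illustrative sketch for categorical features:

import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(feature_values, labels):
    """Information gain ratio used by C4.5/C5.0 to score a candidate split."""
    total = entropy(labels)
    n = len(labels)
    split_info, cond = 0.0, 0.0
    for v in set(feature_values):
        idx = [i for i, fv in enumerate(feature_values) if fv == v]
        w = len(idx) / n
        cond += w * entropy([labels[i] for i in idx])   # entropy after split
        split_info -= w * np.log2(w)                    # penalises many-valued splits
    return (total - cond) / split_info if split_info > 0 else 0.0

print(gain_ratio(["sunny", "sunny", "rain", "rain"], ["no", "no", "yes", "yes"]))
# 1.0: this feature separates the two classes perfectly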
Hello scientific community
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So I ask: why keep the original algorithm if it suffers from weaknesses, and why do we need a new algorithm if an existing one already solves the same problems? I can understand a new algorithm that solves a previously unsolved problem; that is welcome. Otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?
I think we need to organise the existing metaheuristic algorithms and record the pros and cons of each one, together with the problems each one has solved.
Redundant algorithms should disappear, as should needlessly complex ones.
Derivative algorithms should disappear as well.
We need to benchmark the MHs, much like a benchmark test suite.
Also, we need to determine the unsolved problems; if you would like to propose a novel algorithm, try to solve an unsolved problem, otherwise please stop.
Thanks; I look forward to a reputable discussion.
There is an idea to design a new algorithm to improve the results of software operations in the fields of communications, computing, biomedicine, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of intelligent optimization algorithms in general?
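A common protocol: fix a suite of benchmark functions (sphere, Rosenbrock, Rastrigin, or the CEC/BBOB suites), run each algorithm 30 or more independent times with equal evaluation budgets, and compare means and spreads with a nonparametric test (e.g. Wilcoxon). A sketch of one cell of such an experiment, using scipy's differential evolution as the algorithm under test; the function, budget and run count are illustrative:

import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # multimodal benchmark; global minimum 0 at the origin
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 10
results = [differential_evolution(rastrigin, bounds, maxiter=200, seed=s).fun
           for s in range(30)]
# report statistics over independent runs, never a single run
print(np.mean(results), np.std(results), np.min(results))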
I am trying to find the best case time complexity for matrix multiplication.
It is going to be a huge shift for marketers. Tracking identity is tricky at the best of times, with online/offline behaviour and multiple channels of engagement; but when the current methods of targeting, measurement and attribution get disrupted, it is going to be extremely difficult to get identity right, deliver exceptional customer experiences and get compliance right all at once.
We have put together our framework, and initial results show promising measurement techniques, including advanced neo-classical fusion models (borrowed from the financial industry and from biochemical stochastic and deterministic frameworks), with Bayesian and state-space models applied to run the optimisations. Initial results are looking very good, and I am happy to share our wider thinking through this work with everyone.
Link to our framework:
Please suggest how you would handle this environmental change, and suggest methods for measuring the digital landscape going forward.
#datascience #analytics #machinelearning #artificialintelligence #reinforcementlearning #cookieless #measurementsolutions #digital #digitaltransfromation #algorithms #econometrics #MMM #AI #mediastrategy #marketinganalytics #retargeting #audiencetargeting #cmo
Apart from Ant Colony Optimization, can anyone suggest another swarm-based method for edge detection in imagery?
Hi all,
I want to use a filter to extract only the ground points from an airborne LiDAR point cloud. The point clouds are for urban areas only. Which filter, algorithm or software is considered the best for this purpose? Thanks.
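Well-known options include the progressive morphological filter (Zhang et al., 2003), progressive TIN densification (lasground in LAStools), and the cloth simulation filter (CSF) available in CloudCompare and PDAL. The grid-minimum idea underlying most of them, as a deliberately crude sketch:

import numpy as np

def grid_min_ground(points, cell=1.0, height_tol=0.3):
    """Naive ground filter: keep points within height_tol of the lowest
    point in their grid cell. points: (N, 3) array of x, y, z.
    Real filters refine this iteratively to handle slopes and vegetation."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 1_000_000 + ij[:, 1]          # naive 2-D cell hash
    zmin = {}
    for k, z in zip(keys, points[:, 2]):
        zmin[k] = min(z, zmin.get(k, np.inf))       # lowest z per cell
    keep = np.array([z - zmin[k] < height_tol
                     for k, z in zip(keys, points[:, 2])])
    return points[keep]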
This is related to homomorphic encryption. These three algorithms are used in additive and multiplicative homomorphism: RSA and ElGamal are multiplicative, and Paillier is additive. Now I want to know the time complexity of these algorithms.
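For all three schemes the dominant operation is modular exponentiation. With square-and-multiply that is O(log e) modular multiplications, each O(n^2) with schoolbook arithmetic on an n-bit modulus, so roughly O(n^3) per encryption rather than O(1). The exponentiation loop, for reference:

def modexp(base, exp, mod):
    """Square-and-multiply: O(log exp) modular multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # multiply step for each set bit
            result = (result * base) % mod
        base = (base * base) % mod      # square step, once per bit
        exp >>= 1
    return result

print(modexp(7, 560, 561) == pow(7, 560, 561))   # True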
Is there really a significant difference between the performance of the different metaheuristics other than "ϵ"?!! I mean, at the moment we have many different metaheuristics, and the set keeps expanding. Every so often you hear about a new metaheuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory, or selection, or whatever you want to call it, to learn from previous steps. You see in MIC, CEC and SIGEVO many repetitions of new metaheuristics. Does it make sense to be stuck here? Now the same thing is repeating with hyper-heuristics, and so on.
Since the early 90s, metaheuristic algorithms have been continually improved in order to solve a wider class of optimization problems. To do so, different techniques, such as hybridized algorithms, have been introduced in the literature. I would appreciate it if someone could help me find some of the most important techniques used in these algorithms, for example:
- Hybridization
- Orthogonal learning
- Algorithms with dynamic population
May I ask what the current state of the art is for combining FEM with machine learning algorithms? My main questions, to be specific, are:
- Is it a sensible thing to do? Is there an actual need for this?
- What would be the main challenges?
- What have people tried in the past?
There are PIDs, but usually only the proportional part of the PID algorithm is used.
There are mapping systems, as used in diesel engines.
But building multi-layer PID structures is difficult.
Map-based systems (used, for example, in turbines or diesel engines) need a lot of testing and usually work with new machines in controlled conditions.
It would be better to use an algorithm that adapts, slowly increasing or decreasing the control signal in order to obtain maximum performance.
Such an algorithm should also warn when values deviate from what is expected, providing efficient diagnosis of the system.
I need this kind of algorithm to control my simulations, to reduce the number of simulations, but also to control my Miranda and Fusion Reactors.
Perhaps some suitable algorithms are: neural networks, multilayer perceptrons (MLP) and radial basis function (RBF) networks, and also the newer support vector regression (SVR).
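Besides the model-based options listed above, a simple model-free candidate matching "slowly increase or decrease the control signal for maximum performance" is extremum-seeking (hill-climbing) control. A toy sketch, where the performance function stands in for one of the simulations; step size and schedule are illustrative:

import random

def performance(u):
    # stand-in plant: peak performance at u = 3.0 (unknown to the controller)
    return -(u - 3.0) ** 2 + 10 + random.gauss(0, 0.01)

u, step, best = 0.0, 0.1, float("-inf")
for _ in range(200):
    j = performance(u)
    if j > best + 1e-6:
        best = j                # improvement: keep walking this direction
    else:
        step = -0.8 * step      # worse: reverse direction and shrink the step
        best = j
    u += step
print(round(u, 2))              # ends near the optimum u ≈ 3.0

A drop in the measured performance that persists after several reversals is also a natural trigger for the diagnostic warning mentioned above.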

I would be grateful for suggestions to solve the following problem.
The task is to fit a mechanistically motivated nonlinear mathematical model (4-6 parameters, depending on the version of the assumptions used in the model) to a relatively small and noisy data set (35 observations, some likely to be outliers) with a continuous numerical response variable. The model formula contains integrals that cannot be solved analytically, only numerically. My questions are:
1. What optimization algorithms (probably with stochasticity) would be useful in this case to estimate the parameters?
2. Are there reasonable options for the function to be optimized except sum of squared errors?
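For (1), multi-start local least squares or a global optimizer such as differential evolution are common choices; for (2), robust losses (Huber, soft-L1) are the usual alternatives to plain sum of squared errors when outliers are suspected. A sketch with scipy, where the integrand is only a placeholder for your model:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

def model(x, a, b):
    # placeholder model containing a non-analytic integral
    return np.array([a * quad(lambda t: np.exp(-b * t**2), 0.0, xi)[0]
                     for xi in x])

def residuals(theta, x, y):
    a, b = theta
    return model(x, a, b) - y

rng = np.random.default_rng(0)
x = np.linspace(0.1, 3.0, 35)
y = model(x, 2.0, 1.5) + rng.normal(0, 0.05, x.size)     # synthetic noisy data
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y),
                    bounds=(0, 10), loss="soft_l1", f_scale=0.1)
print(fit.x)   # recovers roughly (2.0, 1.5); restart from several x0 in practice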
In a PSO algorithm, when the sequence of elements of the representative solution changes, all elements change; and because the elements depend on each other, the algorithm does not converge to an optimized answer.
How can we solve this problem and make the algorithm converge?
-----
The representative solution consists of two parts.
The first part contains a permutation of integers, encoded as continuous values
between 0 and 1,
which are decoded back to integers in the fitness function.
The second part contains continuous numbers between 0 and 1.
The interpretation of the second part depends on the values of the first part.
----
There is no problem with duplicate answers in the permutation, because a repair procedure is applied.
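Incidentally, the first part as described is the "random keys" encoding (Bean, 1994): if the decoding is done by sorting the keys, PSO's continuous updates stay meaningful and small position changes map to small permutation changes, which usually helps convergence. A sketch of the decoder:

import numpy as np

def decode_random_keys(position):
    """Map a continuous PSO position in [0, 1]^n to a permutation:
    the rank of each key gives the order of the corresponding item."""
    return np.argsort(position)       # deterministic, no repair needed

pos = np.array([0.73, 0.12, 0.55, 0.91])
print(decode_random_keys(pos))        # [1 2 0 3]: item 1 first, item 3 last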
For example, I have a South Carolina map comprising 5833 grid points, as shown below in the picture. How do I interpolate to get data for unsampled points which are not among the 5833 points but lie within the South Carolina region (the red region in the picture)? Which interpolation technique is best for a South Carolina region of 5833 grid points?
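As a baseline, scipy's griddata performs linear or cubic interpolation of scattered 2-D data; kriging (e.g. via the PyKrige package) is often preferred for geospatial fields because it also yields uncertainty estimates. A sketch with stand-in coordinates; clipping the query points to the state polygon is assumed to happen elsewhere:

import numpy as np
from scipy.interpolate import griddata

# pts: (5833, 2) lon/lat of the sampled grid points, vals: their data values
pts = np.random.uniform([-83, 32], [-79, 35.5], size=(5833, 2))   # stand-in
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])                      # stand-in
# unsampled locations inside South Carolina at which data are needed
query = np.random.uniform([-82, 33], [-80, 34.5], size=(100, 2))
est = griddata(pts, vals, query, method="cubic")   # or "linear"
print(est[:5])   # points outside the data's convex hull come back NaN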

On an online website, some users may create multiple fake accounts to promote (like/comment on) their own comments/posts; for example, on Instagram, to make their comment appear at the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
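One common unsupervised recipe: build per-account behavioural features and feed them to an anomaly detector such as Isolation Forest or LOF, or cluster an account-similarity graph. A sketch with made-up features (all feature choices here are hypothetical):

import numpy as np
from sklearn.ensemble import IsolationForest

# hypothetical per-account features: account age (days), likes given per day,
# fraction of likes going to a single target account, follower count
X = np.array([
    [400,  3.1, 0.05, 250],
    [380,  2.2, 0.08, 310],
    [  2, 90.0, 0.97,   1],   # burst of activity aimed at one account
    [350,  4.0, 0.10, 180],
    [  3, 75.0, 0.95,   0],   # likewise suspicious
])
clf = IsolationForest(contamination=0.4, random_state=0).fit(X)
print(clf.predict(X))   # -1 flags anomalies (likely the two burst accounts)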
I want to learn more about time complexity and the big-O notation of algorithms.
What are trusted books and resources I can learn from?
In an AEC configuration as shown in the Fig.1, let us consider u(n), the far-end speech signal and x(n), the near-end speech signal. The desired signal is d(n)=v(n)+x(n), where v(n) is the echo signal generated from the echo path impulse response. The purpose of an adaptive filter W is to find an echo estimate, y(n) which is then subtracted from the desired signal, d(n) to obtain e(n).
When I implement the APA adaptive algorithm for echo cancellation, I observe a leakage phenomenon, which can be explained as follows:
y(n) contains a component proportional to x(n) that is subtracted from the desired signal. This phenomenon is in fact a leakage of x(n) into y(n) through the error signal
e(n); the result is an undesired attenuation of the near-end signal.
Because of this near-end leakage phenomenon, there is near-end signal suppression in the post-processing output.
I am handling double-talk using robust algorithms, without the need for a DTD.
Could you please suggest how to avoid near-end signal leakage in the adaptive filter output y(n)?

Can anyone suggest a data compression algorithm to compress and regenerate data from sensors (e.g. an accelerometer, which records time and acceleration) used to obtain structural vibration response? I have tried using PCA, but I am unable to regenerate my data. Kindly suggest a suitable method, or some other algorithm to combine with PCA.
What would be a suitable model for solving the regression problem? Is there any hybrid algorithm or new model/framework in deep learning to solve this problem? How promising is deep learning for regression tasks?
I need to implement the epsilon-constraint method to solve multi-objective optimization problems, but I don't know how to choose each epsilon interval, nor when to terminate the algorithm, that is to say, the stopping criteria.
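For what it's worth, a common recipe: keep one objective in the cost, move the others into inequality constraints, and sweep ε between the constrained objective's best and worst attainable values; the end of the sweep is the natural stopping criterion, and each solve contributes one Pareto point. A sketch for two objectives (the functions, ε range and grid are illustrative):

import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2                # objective kept in the cost
f2 = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2    # objective moved to a constraint

pareto = []
for eps in np.linspace(0.5, 8.0, 12):               # sweep epsilon over f2's range
    res = minimize(f1, x0=[1.0, 1.0],
                   constraints=[{"type": "ineq",    # require f2(x) <= eps
                                 "fun": lambda x, e=eps: e - f2(x)}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))
print(pareto[:3])    # each epsilon yields one point of the Pareto front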
It seems that the quadprog function of MATLAB, using the (conventional) interior-point algorithm, is not fully exploiting the sparsity and structure of the sparse QP formulation, judging by my results.
In Model Predictive Control, the computational complexity should scale linearly with the prediction horizon N. However, results show that the complexity scales quadratically with the prediction horizon N.
What can be possible explanations?
When doing machine learning, do we normally use several algorithms for comparison? For example, if the RMSE of SVM is 0.1, how do I conclude that this model performed well? Just by saying the RMSE value is low, so the result is good? But if there is no comparison, how do I say it is low?
Or shall I include other algorithms, e.g. random forest, to compare the value? I intended to use only SVM regression, but now I am a bit stuck on the interpretation of the results. Thank you in advance!
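For what it's worth, the usual practice is to compare against at least a trivial baseline on identical cross-validation folds; a "predict the mean" dummy regressor gives an absolute reference for whether an RMSE of 0.1 is actually low. A sketch on synthetic data (the dataset and model choices are illustrative):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score, KFold
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.dummy import DummyRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=10, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("dummy", DummyRegressor()),        # mean-prediction baseline
                    ("svm", SVR()),
                    ("rf", RandomForestRegressor(random_state=0))]:
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error")
    print(name, rmse.mean().round(2), "+/-", rmse.std().round(2))
# "low" RMSE is meaningful relative to the dummy baseline and to the
# spread (standard deviation) of the target variable itself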
Dear all,
I have a watershed and wish to randomly split it into different 'artificial' farms, where the farm areas follow an exponential distribution, as found empirically in other studies. The 'artificial' farms could be rectangular or any other shape.
Is there any way to do this in GIS or other software? Any method achieved through shapefile or raster can be accepted.
Thank you!
Pan
Hello, everyone. I am a student of electrical engineering and my research field is related to the optimization of power systems.
I know that the algorithm we should choose depends on our problem, but there are lots of heuristic and metaheuristic algorithms available to choose from. It also takes time to understand a specific algorithm, and only afterwards may we discover that the chosen algorithm was not the best one for the problem. So, given my problem, how can I choose the best algorithm?
Is there any simple solution available that can save my time as well?
Thank you for your precious time.
I want to execute the Apriori algorithm for association rule mining in data mining through MATLAB.
I'm trying to implement a fall detection algorithm written in C on a Zybo board.
I am using Vivado HLS.
I don't know how to start, even though I have already done the tutorials related to Zynq-7000.
Thank you for any help.
I am currently working on binary classification of EEG recordings, and I came across CSP.
As far as I understand, CSP allows you to choose the best features by maximizing the variance between two classes, which is perfect for what I'm doing. Here follow the details and questions:
- I have N trials per subject, of which half belong to class A and the other half to class B.
- Let's say I want to apply CSP to this subject's trials. From what I understood, I should apply CSP to all my trials (please correct me if I'm wrong here). Do I arbitrarily choose which trial from class A to compare with one from class B? Is the order in which I do it indifferent?
- After CSP I should get the projection matrix (commonly written as W), from which I can obtain the transformed signal and compute the variances (some of which will be my features). Why is the variance passed through a log function in most papers?
Thank you very much
I am working on optimizing well placement in a condensate reservoir model using an algorithm. Any kind of code example will be appreciated.
Hello,
I am working with a convex hull in n-dimensions (n>3) and I am having problems generating points on the convex hull surface. Ideally, I would like the points to be uniformly distributed or almost uniformly distributed. I am mostly looking for something simple to understand and implement.
(I am using scipy.spatial.ConvexHull python library)
Any help would be greatly appreciated :)
edit: Thank you very much for the answers already given.:) I have reformulated the question hoping to remove any confusion.
Thanks,
Noemie
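One simple approach, sketched under the assumption that the points are in general position: take the hull's simplicial facets from scipy, choose a facet with probability proportional to its (n-1)-dimensional volume, then draw uniform barycentric coordinates from a flat Dirichlet:

import numpy as np
from scipy.spatial import ConvexHull

def sample_on_hull(points, n_samples, rng=np.random.default_rng()):
    hull = ConvexHull(points)
    simplices = points[hull.simplices]           # (m, n, n): facet vertex coords
    # facet volume via the Gram determinant of its edge vectors
    # (the constant 1/(n-1)! factor cancels in the probabilities)
    edges = simplices[:, 1:, :] - simplices[:, :1, :]
    gram = edges @ edges.transpose(0, 2, 1)
    vols = np.sqrt(np.abs(np.linalg.det(gram)))
    idx = rng.choice(len(simplices), size=n_samples, p=vols / vols.sum())
    # flat Dirichlet = uniform barycentric coordinates on each simplex
    w = rng.dirichlet(np.ones(simplices.shape[1]), size=n_samples)
    return np.einsum("ij,ijk->ik", w, simplices[idx])

pts = np.random.randn(200, 5)                    # n = 5 dimensions
print(sample_on_hull(pts, 1000).shape)           # (1000, 5), uniform on the surface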
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, most of which are assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code that I used is the following, where ITERS = 1.
public static List encrypt(String policy, int secLevel, String type,
byte[] data, int ITERS){
double results[] = new double[ITERS];
DETABECipher cipher = new DETABECipher();
long startTime, endTime;
List list = null;
for (int i = 0; i < ITERS; i++){
startTime = System.nanoTime();
list = cipher.encrypt(data, secLevel,type, policy);
endTime = System.nanoTime();
results[i] = (double)(endTime - startTime)/1000000000.0;
}
return list;
}
public List encrypt(byte abyte0[], int i, String s, String s1)
{
AccessTree accesstree = new AccessTree(s1);
if(!accesstree.isValid())
{
System.exit(0);
}
PublicKey publickey = new PublicKey(i, s);
if(publickey == null)
{
System.exit(0);
}
AESCipher.genSymmetricKey(i);
timing[0] = AESCipher.timing[0];
if(AESCipher.key == null)
{
System.exit(0);
}
byte abyte1[] = AESCipher.encrypt(abyte0);
ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
timing[1] = AESCipher.timing[1];
timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
long l = System.nanoTime();
LinkedList linkedlist = new LinkedList();
linkedlist.add(abyte1);
linkedlist.add(AESCipher.iv);
linkedlist.add(abeciphertext.toBytes());
linkedlist.add(new Integer(i));
linkedlist.add(s);
long l1 = System.nanoTime();
timing[3] = (double)(l1 - l) / 1000000000D;
return linkedlist;
}
public static byte[] encrypt(byte[] paramArrayOfByte)
{
if (key == null) {
return null;
}
byte[] arrayOfByte = null;
try
{
long l1 = System.nanoTime();
cipher.init(1, skey);
arrayOfByte = cipher.doFinal(paramArrayOfByte);
long l2 = System.nanoTime();
timing[1] = ((l2 - l1) / 1.0E9D);
iv = cipher.getIV();
}
catch (Exception localException)
{
System.out.println("AES MODULE: EXCEPTION");
localException.printStackTrace();
System.out.println("---------------------------");
}
return arrayOfByte;
}
public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[]
paramArrayOfByte, AccessTree paramAccessTree)
{
Pairing localPairing = paramPublicKey.e;
Element localElement1 = localPairing.getGT().newElement();
long l1 = System.nanoTime();
localElement1.setFromBytes(paramArrayOfByte);
long l2 = System.nanoTime();
timing[3] = ((l2 - l1) / 1.0E9D);
l1 = System.nanoTime();
Element localElement2 = localPairing.getZr().newElement().setToRandom();
Element localElement3 = localPairing.getGT().newElement();
localElement3 = paramPublicKey.g_hat_alpha.duplicate();
localElement3.powZn(localElement2);
localElement3.mul(localElement1);
Element localElement4 = localPairing.getG1().newElement();
localElement4 = paramPublicKey.h.duplicate();
localElement4.powZn(localElement2);
l2 = System.nanoTime();
timing[4] = ((l2 - l1) / 1.0E9D);
ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
timing[5] = ShamirDistributionThreaded.timing;
return localABECiphertext;
}
}
public ABECiphertext(Element element, Element element1, AccessTree
accesstree)
{
c = element;
cp = element1;
cipherStructure = new HashMap();
tree = accesstree;
}
public void execute(AccessTree accesstree, Element element,
ABECiphertext abeciphertext, PublicKey publickey)
{
pairing = publickey.e;
ct = abeciphertext;
PK = publickey;
countDownLatch = new
CountDownLatch(accesstree.numAtributes);
timing = 0.0D;
double d = System.nanoTime();
Thread thread = new Thread(new Distribute(abeciphertext,
accesstree.root, element));
thread.start();
try
{
countDownLatch.await();
long l = System.nanoTime();
timing = ((double)l - d) / 1000000000D;
synchronized(mutex)
{
}
}
catch(Exception exception)
{
exception.printStackTrace();
}
}
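A single wall-clock timing with ITERS = 1 cannot establish asymptotic complexity: O(.) describes how the running time grows with input size (here, e.g., the number of attributes in the access policy), so tracing one run of assignments will always look "O(1)". A language-agnostic way to estimate the order empirically, sketched in Python; the quadratic workload is only a stand-in for "encrypt with an n-attribute policy":

import time
import numpy as np

def timeit(f, n):
    t0 = time.perf_counter()
    f(n)
    return time.perf_counter() - t0

def empirical_order(f, sizes, reps=5):
    """Estimate the exponent k in time ~ n^k by a log-log fit."""
    ts = [min(timeit(f, n) for _ in range(reps)) for n in sizes]  # best of reps
    k, _ = np.polyfit(np.log(sizes), np.log(ts), 1)
    return k

# example: an O(n^2) stand-in workload
quadratic = lambda n: [i * j for i in range(n) for j in range(n)]
print(round(empirical_order(quadratic, [100, 200, 400, 800]), 1))   # ~ 2.0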
In the ε-constraint method, one objective is used as the objective function, and the remaining objectives are used as constraints, with the epsilon values as bounds. In this case:
- Do we need to apply penalty method to handle the constraint?
- How to select the best solution?
- How to get the final Pareto set?
Dear colleagues,
If you consider a "complete dense" multivariate polynomial, does there exist a Horner factorization scheme, as for a classical univariate polynomial?
(By "complete dense", I mean all possible monomials up to a given total order; the number of monomials is given by the known combination formula C(n+r, r), if I am correct.)
Thanks for your answers.
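Yes, at least in the nested sense: view the polynomial as a polynomial in x1 whose coefficients are polynomials in the remaining variables, and apply Horner recursively. A bivariate sketch for the dense case, where coeffs[i][j] is the coefficient of x^i y^j (ragged rows cover the triangular total-order truncation):

def horner_2d(coeffs, x, y):
    """Evaluate sum_ij coeffs[i][j] * x**i * y**j by Horner in x,
    with an inner Horner in y for each coefficient row."""
    result = 0.0
    for row in reversed(coeffs):        # Horner in x
        inner = 0.0
        for c in reversed(row):         # Horner in y
            inner = inner * y + c
        result = result * x + inner
    return result

# p(x, y) = 1 + 2y + 3x + 4xy, evaluated at (2, 5): 1 + 10 + 6 + 40
print(horner_2d([[1, 2], [3, 4]], 2.0, 5.0))   # 57.0

Whether a cheaper (non-nested) factorization exists for the complete dense case is a separate question; the nested scheme above already saves all explicit power computations.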
Intuitively, maximum likelihood inference on high-frequency data should be slow because of the large data set size. I was wondering if anyone has experience with slow inference; I could then design optimization algorithms to speed up the inference.
I tried this with Yacine Ait-Sahalia's work on estimating diffusion models, using his code, which (unfortunately!) is pretty fast, even for large data sets. If anyone knows of a large, slow, high-frequency financial econometric problem, do let me know.
I want to study RPL in mWSN. I am using the NS-2.35 and WSNet simulators, and Cooja.
Where can I find some source code of algorithms for improving RPL?
We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles on core algorithmic research.
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
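Linear separability of two finite point sets is exactly an LP feasibility question: find w, b with w·x + b >= 1 on X and w·x + b <= -1 on the complement. A sketch with scipy's linprog; note the complement has 2^n - |X| points, so enumerating it is only practical for moderate n:

import itertools
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, n):
    """Is X, a subset of {0,1}^n, linearly separable from its complement?"""
    Xset = {tuple(x) for x in X}
    comp = [p for p in itertools.product([0, 1], repeat=n) if p not in Xset]
    A_ub, b_ub = [], []
    for p in Xset:    # want w.p + b >= 1, i.e. -(w.p + b) <= -1
        A_ub.append([-float(v) for v in p] + [-1.0])
        b_ub.append(-1.0)
    for p in comp:    # want w.p + b <= -1
        A_ub.append([float(v) for v in p] + [1.0])
        b_ub.append(-1.0)
    res = linprog(np.zeros(n + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n + 1))   # variables w_1..w_n, b
    return res.success

print(linearly_separable([(0, 0), (0, 1)], 2))   # True: x1 = 0 half-space
print(linearly_separable([(0, 0), (1, 1)], 2))   # False: XOR-style split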
What are the links in their definitions? How do you interconnect them? What are their similarities or differences? ...
I would be grateful if you could reply by referring to valid scientific literature sources.
Is there any polynomial (reasonably efficient) reduction which makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties which are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
Dijkstra's algorithm performs the best sequentially on a single CPU core. Bellman-Ford implementations and variants running on the GPU outperform this sequential Dijkstra case, as well as parallel Delta-Stepping implementations on multicores, by several orders of magnitude for most graphs. However, there exist graphs (such as road-networks) that perform well only when Dijkstra's algorithm is used. Therefore, which implementation and algorithm should be used for generic cases?
Hi, I have a little prior experience with genetic algorithms.
Currently I am trying to use a GA for scheduling: I have some events and rooms, and rooms must be scheduled for these events; each event has different time requirements, and there are constraints on the availability of rooms.
But I want to know whether there are any alternatives to GA, since GA is a somewhat random and slow process. Are there any other techniques which can replace GA?
Thanks in advance.
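Common alternatives include simulated annealing, tabu search, and constraint-programming solvers (Google OR-Tools' CP-SAT, for instance, is strong on timetabling). A minimal simulated annealing sketch on a toy room-assignment instance; all problem data here are made up:

import math, random

def anneal(initial, cost, neighbor, T0=10.0, cooling=0.995, iters=5000):
    """Generic simulated annealing: a common drop-in alternative to a GA."""
    state, c = initial, cost(initial)
    T = T0
    for _ in range(iters):
        cand = neighbor(state)
        dc = cost(cand) - c
        if dc < 0 or random.random() < math.exp(-dc / T):
            state, c = cand, c + dc    # accept improvements, and some worsenings
        T *= cooling                   # cool down: fewer uphill moves over time
    return state, c

# toy instance: assign 6 events to 3 rooms, minimising time clashes
events, rooms = 6, 3
conflicts = {(0, 1), (2, 3), (4, 5)}   # event pairs that overlap in time
def cost(assign):                      # clashes: conflicting events sharing a room
    return sum(assign[a] == assign[b] for a, b in conflicts)
def neighbor(assign):                  # move one random event to a random room
    a = list(assign)
    a[random.randrange(events)] = random.randrange(rooms)
    return tuple(a)

best, c = anneal(tuple(random.randrange(rooms) for _ in range(events)), cost, neighbor)
print(best, c)   # c == 0 means a clash-free assignment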
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that, in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms. How can algorithms work less rationally and help us come across more serendipitous encounters?
As an experiment, I created a small website which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.
Feel free to try it and comment your thoughts on it! I'm happy for any feedback.
Thank you
Michael
Intel's SGX extensions create isolated application enclaves, which disallow information leakage and unverified access to private data. However, SGX is now known to be broken as some works have leaked data on real hardware. What do such works exploit to break SGX's security invariants?
Dijkstra's algorithm performs well sequentially. However, applications require even better parallel performance because of real-time constraints. Implementations such as SprayList and relaxed queues allow parallelism in the priority queue operations of Dijkstra's algorithm, with various performance vs accuracy trade-offs. Which of these algorithms is the best in terms of raw parallel performance?
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
I need The Goldstein 2D branch cut algorithm in MATLAB. Does any one have a working version ? I need to compare it with another algorithm.
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that, in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms. How can algorithms work less rationally and help us come across more serendipitous encounters?
I was wondering whether any of you are familiar with some sort of an irrational algorithm. Does this exist? Let me know if you know something in this field, or what you think about it; anything helps!
Thank you
Michael
Synchronization and memory costs are becoming humongous bottlenecks in today's architectures. However, algorithm complexities assume these operations as constant, which are done in O(1) time. What are your opinions in this regard? Are these good assumptions in today's world? Which algorithm complexity models assume higher costs for synchronization and memory operations?
Current parallel BFS algorithms are known to have reduced time complexity. However, such cases do not take into account synchronization costs which increase exponentially with the core count. Such synchronization costs stem from communication costs due to data movement between cores, and coherence traffic if using a cache coherent multicore. What is the best parallel BFS algorithm available in this case?
Graph algorithms such as BFS and SSSP (Bellman-Ford or Dijkstra's algorithm) generally exhibit a lack of locality. A vertex at the start of the graph may want to update an edge that exists in a farther part of the graph. This is a problem in graphs whose memory requirements far exceed those available in the machine's DRAM. How must the graph be streamed into the machine in this case? What are the consequences for a parallel multicore in such cases where access latency and core utilization are of utmost importance?
Or is it just an effective name to call adaptive and self-learning programmed algorithms?
In previous versions of OpenCV, there was an option to extract a specific number of keypoints, as desired, like:
kp, desc = cv2.SIFT(150).detectAndCompute(gray_img, None)
But as of OpenCV 3.1, SIFT and other "non-free" algorithms have moved to xfeatures2d, so that function gives an error. Kindly tell me how I can set a limit on the number of keypoints to be extracted using OpenCV 3.1. Thanks!
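With the contrib build installed (the opencv-contrib-python package), the nfeatures argument of SIFT_create keeps approximately the N strongest keypoints; a sketch:

import cv2

gray_img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)   # example path
sift = cv2.xfeatures2d.SIFT_create(nfeatures=150)   # retain ~150 best keypoints
kp, desc = sift.detectAndCompute(gray_img, None)
print(len(kp))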
I am looking for a public dataset for e-learning that I can use for testing the performance and accuracy of recommender systems algorithms. Does anyone have an idea where I can find a public dataset?
I have been trying to implement BSA in Python, and it looks like the algorithm is pretty confusing. Has anyone implemented this algorithm? Any language.
The paper can be found here -> http://users.elis.ugent.be/~bschrauw/publicaties/ijcnn.pdf
Thanks.
Regards,
Akshay.
Update:
I have implemented the BSA algorithm here -> https://github.com/akshaybabloo/Spikes. If anyone needs it, please feel free to fork it.
Some workloads or even inputs perform well on GPUs, while others perform well on multicores. How do we decide which machine to buy for a generic problem base for optimal performance? Cost is NOT taken as a factor here.
I want to make a small piece of software using C++. The input of this software will be source code written in a text file. I want to identify the loops and the if statements in the source code and perform some operations on them; the output will be the same source code with additional comments. My question is: are there any libraries in C++ that can help me identify these places and deal with them as blocks?
Thank you
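One established route is a real C++ parser: libclang exposes the Clang AST through a stable C API, with official Python bindings (used below for brevity; the same cursor traversal is available from C++). A sketch that locates loop and if statements together with their source ranges, which is enough to treat each as a block when rewriting the file:

import clang.cindex as ci

index = ci.Index.create()
tu = index.parse("input.cpp", args=["-std=c++17"])   # example input file

TARGETS = {ci.CursorKind.FOR_STMT, ci.CursorKind.WHILE_STMT,
           ci.CursorKind.IF_STMT}
for cursor in tu.cursor.walk_preorder():             # visit every AST node
    if cursor.kind in TARGETS:
        ext = cursor.extent                          # source range of the block
        print(cursor.kind.name, "lines", ext.start.line, "to", ext.end.line)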