Science topic

Algorithms - Science topic

Explore the latest questions and answers in Algorithms, and find Algorithms experts.
Questions related to Algorithms
  • asked a question related to Algorithms
Question
4 answers
Self-improvement in AI is about performance, not capabilities, because the algorithm, in the form of instructions, cannot improve itself without criteria for doing so. Such criteria are based on variations in the input and output of the system, with the goal set by the algorithm to achieve better results. In your opinion, how do you define self-improvement in AI?
Relevant answer
Answer
Self-improvement, where an agent autonomously improves its own functioning, has intrigued the AI community for several decades. We believe that building robust agent-based systems that are scalable and maintainable requires that agents can autonomously adapt and improve.
A fundamental characteristic of an AI agent is its intelligence in taking actions, checking outcomes against those actions, and improving itself if those actions result in divergence from its goals.
Regards,
Shafagat
  • asked a question related to Algorithms
Question
7 answers
  1. Is it possible to create a special optimization algorithm to improve energy efficiency?
2. Is it possible to develop an algorithm into a hybrid algorithm based on energy-efficiency parameters?
Dear respected researchers,
I am grateful to you for taking the time to read this question, and I look forward to your suggestions and answers.
My greetings and respect to you
Relevant answer
Thanks for the answer
  • asked a question related to Algorithms
Question
4 answers
Call for papers: the 2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI 2024), hosted by the China Computing Power Conference, will be held on June 21-23, 2024 in Zhengzhou, China. The conference focuses on algorithms, high-performance computing, artificial intelligence, and related research areas.
Call for papers-2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence
Important information
Conference website (submission link): https://ais.cn/u/AvAzUf
Dates: June 21-23, 2024
Venue: Zhengzhou, China
Acceptance/rejection notification: about 1 week after submission
Indexing: EI Compendex, Scopus
Topics
1. Algorithms (algorithm analysis / approximation algorithms / computability theory / evolutionary algorithms / genetic algorithms / numerical analysis / online algorithms / quantum algorithms / randomized algorithms / sorting algorithms / algorithmic graph theory and combinatorics / computational geometry / computing techniques and applications, etc.)
2. Artificial intelligence (natural language processing / knowledge representation / intelligent search / machine learning / perception / pattern recognition / logic programming / soft computing / management of imprecision and uncertainty / artificial life / neural networks / complex systems / genetic algorithms / computer vision, etc.)
3. High-performance computing (network computing technology / HPC software and tool development / computer system evaluation / cloud computing systems / mobile computing systems / peer-to-peer computing / grid and cluster computing / service and Internet computing / utility computing, etc.)
4. Image processing (image recognition / image detection networks / robot vision / clustering / image digitization / image enhancement and restoration / image data coding / image segmentation / analog image processing / digital image processing / image description / optical image processing / digital signal processing / image transmission, etc.)
5. Other related topics
Publication
1. Submissions will be rigorously reviewed by 2-3 organizing committee experts; accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X) and, after publication, submitted to EI Compendex and Scopus for indexing.
*The first three editions of AHPCAI have all been indexed in EI - stable and fast!
2. This conference also openly solicits papers as a satellite meeting of the 2024 Computing Power Conference; high-quality papers will be selected for the 2024 Computing Power Conference, published by IEEE, and submitted to EI Compendex and Scopus after publication.
Participation
1. Author: one author per accepted paper attends free of charge;
2. Keynote speaker: apply to give a keynote talk, subject to review by the organizing committee;
3. Oral presentation: apply to give a 10-minute oral presentation;
4. Poster: apply for a poster presentation, A1 size, colour print;
5. Attendee: attend without submitting a paper; attendees may also apply to present.
6. Registration: https://ais.cn/u/AvAzUf
Relevant answer
Answer
Wishing you every success, International Journal of Complexity in Applied Science and Technology
  • asked a question related to Algorithms
Question
2 answers
2024 5th International Conference on Computer Vision and Data Mining(ICCVDM 2024) will be held on July 19-21, 2024 in Changchun, China.
Conference Website: https://ais.cn/u/ai6bQr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Computer Science and Engineering
......
◕ Vision Science and Engineering
· Image/video analysis
· Feature extraction, grouping and division
· Scene analysis
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Robotics Science and Engineering
· Image/video analysis
· Feature extraction, grouping and division
· Scene analysis
......
All accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X), and submitted to EI Compendex, Scopus for indexing.
Important Dates:
Full Paper Submission Date: June 19, 2024
Registration Deadline: June 30, 2024
Final Paper Submission Date: June 30, 2024
Conference Dates: July 19-21, 2024
For More Details please visit:
Relevant answer
Answer
Thanks for sharing. Wishing you every success in your task.
  • asked a question related to Algorithms
Question
4 answers
Machine learning algorithms can be less accurate and powerful when working with small datasets, because their ability to recognize patterns generally scales with the dataset's size. How much data is enough for machine learning? What are the ways to use machine learning when we have a very limited dataset?
Relevant answer
Answer
The more complicated your data, the more data you need.
Also, if we talk about neural networks, the more neurons you have, the less data you might need, which is something of an inverse relationship.
So we cannot give a number; it is highly dependent on the problem you are trying to solve. If you know the data are linear, even 2 values might be enough. If the data are highly non-linear, sometimes even 2,000 data points are not enough. If you are working with images, you might need 50,000 or more. Also, some machine learning methods work better with small amounts of data, others with large datasets. In general, the more data, the better the model we can make.
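A practical way to answer "how much is enough" for a given problem is to plot a learning curve: train on growing subsets and check whether validation performance is still improving. A minimal scikit-learn sketch (the digits dataset and random-forest model are placeholders, not a recommendation):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)  # placeholder dataset

# Cross-validate the model on 10%..100% of the training data.
sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> validation accuracy {s:.3f}")
# If the curve is still rising at the largest size, more data would likely
# help; if it has flattened, extra data of the same kind buys little.
```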
  • asked a question related to Algorithms
Question
2 answers
2024 4th International Conference on Computer, Remote Sensing and Aerospace (CRSA 2024) will be held in Osaka, Japan on July 5-7, 2024.
Conference Website: https://ais.cn/u/MJVjiu
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Algorithms
Image Processing
Data processing
Data Mining
Computer Vision
Computer Aided Design
......
2. Remote Sensing
Optical Remote Sensing
Microwave Remote Sensing
Remote Sensing Information Engineering
Geographic Information System
Global Navigation Satellite System
......
3. Aeroacoustics
Aeroelasticity and structural dynamics
Aerothermodynamics
Airworthiness
Autonomy
Mechanisms
......
All accepted papers will be published in the Conference Proceedings, and submitted to EI Compendex, Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Registration Deadline: May 31, 2024
Conference Date: July 5-7, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system gets you priority review and feedback
Relevant answer
Answer
Dear Kazi Redwan, the regular registration fee (4-6 pages) is 485 USD. Online presentation is accepted. All accepted papers will be published in the Conference Proceedings and submitted to EI Compendex and Scopus for indexing.
For more details about registration, please visit http://www.iccrsa.org/registration_all
For Paper submission: https://ais.cn/u/MJVjiu
  • asked a question related to Algorithms
Question
1 answer
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing. Conference proceedings papers cannot be shorter than 4 pages.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system gets you priority review and feedback
Relevant answer
Please let me know if anyone is interested.
  • asked a question related to Algorithms
Question
2 answers
Understanding the behavior of the Delaunay triangulation algorithm with collinear points is crucial for assessing its robustness and precision in computational geometry applications.
Relevant answer
Answer
During Delaunay triangulation, if a triangle is created from three (almost) collinear points, its big circumcircle will probably cover another point in the point set, which will in turn violate the Delaunay property. If such a case happens, the algorithm will perform a diagonal edge flip with an adjacent triangle, forming two new triangles, after which the Delaunay property should be satisfied. The resulting triangulation is the one that has very few "sliver triangles", which is desirable in graphic rendering, localization, etc.
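The numerical fragility described above can be seen directly in the incircle test that drives these edge flips: for (nearly) collinear triangle vertices, the circumcircle degenerates and the predicate's determinant approaches zero. A minimal sketch (the coordinates are made up for illustration):

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """Positive if d lies inside the circumcircle of the ccw triangle (a, b, c).

    For (near-)collinear a, b, c the circumcircle degenerates towards a line,
    the determinant approaches zero, and its sign becomes numerically fragile,
    which is exactly when robust predicates and edge flips matter.
    """
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    return np.linalg.det(np.array(rows))

# Nearly collinear triangle: its huge circumcircle swallows a distant point,
# violating the Delaunay property and forcing an edge flip.
print(in_circumcircle((0, 0), (1, -1e-6), (2, 0), (1, 100.0)))  # > 0: inside
```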
  • asked a question related to Algorithms
Question
1 answer
Explore the synergistic impact of machine learning on improving the precision of predicting protein structures in bioinformatics. Seeking insights into the specific methodologies and advancements that contribute to enhanced accuracy.
Relevant answer
Answer
Machine learning algorithms have been shown to enhance the accuracy of protein structure prediction in bioinformatics. Traditional methods for protein structure prediction rely on energy minimization and molecular dynamics simulations, which can be computationally expensive and time-consuming. Machine learning algorithms can predict protein structure more efficiently and accurately by learning from large datasets of known protein structures and their corresponding sequences.
Machine learning algorithms predict protein structure by analyzing the relationships between amino acid sequences and known structures; they identify patterns in the data and use them to predict the structure of unknown proteins. They can also be used to predict the stability of protein structures and to identify potential drug targets.
  • asked a question related to Algorithms
Question
2 answers
Seeking insights on the comparative time complexities of Dijkstra's and Prim's algorithms in the context of finding minimum spanning trees, aiming to understand their efficiency trade-offs in graph analysis.
Relevant answer
Answer
Hello S M. Dijkstra's algorithm is used to find the shortest distance between two vertices in a weighted graph, rather than to find a minimum spanning tree. I believe that you might be thinking of Kruskal's algorithm. The complexity of Kruskal's algorithm is O(m log m), where m is the number of edges, and the complexity of Prim's algorithm is O(n^2) with an adjacency matrix (or O(m log n) with a binary heap), where n is the number of vertices. So if you have a sparse graph, Kruskal's algorithm will be preferable, but for dense graphs Prim's algorithm will be better.
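For reference, here is a compact Kruskal sketch with a union-find structure; the sort dominates at O(m log m), and the edge list is a made-up example:

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal: sort edges (O(m log m)),
    then use union-find to skip any edge that would close a cycle."""
    parent = list(range(n))

    def find(x):  # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # different components: keep edge
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Hypothetical 4-vertex graph, edges given as (weight, u, v):
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (1, 3, 2), (1, 2, 3)], total weight 6
```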
  • asked a question related to Algorithms
Question
1 answer
Seeking insights for algorithmic optimization.
Relevant answer
Answer
As a starting point, what did Google tell you?
  • asked a question related to Algorithms
Question
1 answer
Hi,
Does anyone know a good way to mathematically define/identify the onset of a plateau for a curve y = f(x) in a 2D plane?
A bit more background: I have a set of curves from which I'd like to extract the x values where the "plateau" starts, by applying a consistent definition of plateau onset.
Thanks,
Yifan
Relevant answer
Answer
Hi!
I would propose looking for points where the derivative is close to zero.
You can identify the values of x where the absolute value of the derivative is less than a small threshold ε (something small like 1e-5, so that you capture values near zero).
The plateau onset can then be determined by finding the minimum and maximum x-values where the derivative is close to zero. This will give you the range of x-values where the plateau starts.
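A minimal numpy sketch of this idea; the threshold ε and the test curve are placeholder assumptions, and for noisy data you would smooth y before differentiating:

```python
import numpy as np

def plateau_onset(x, y, eps=1e-2):
    """Return the smallest x from which |dy/dx| stays below eps."""
    dydx = np.gradient(y, x)          # numerical derivative
    flat = np.abs(dydx) < eps         # True where the curve is "flat"
    for i in range(len(x)):           # first index where flatness is permanent
        if flat[i:].all():
            return x[i]
    return None                       # no plateau under this definition

x = np.linspace(0, 10, 200)
y = 1 - np.exp(-x)                    # saturating test curve
print(plateau_onset(x, y))            # about ln(1/eps) = 4.6 for eps = 1e-2
```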
  • asked a question related to Algorithms
Question
3 answers
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also have and are developing search engines in their range of Internet information services are working on developing technological solutions to implement ChatGPT-type artificial intelligence into these search engines. Currently, there are discussions and considerations about the social and ethical implications of such a potential combination of these technologies and offering this solution in open access on the Internet.

The considerations relate to the possible level of risk of manipulation of the information message in the new media, the potential disinformation resulting from a specific algorithm model, the disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of a planned shaping of public opinion, etc. This raises another issue for consideration concerning the legitimacy of creating a control institution that will carry out ongoing monitoring of the level of objectivity, independence, ethics, etc. of the algorithms used as part of the technological solutions involving the implementation of artificial intelligence of the ChatGPT type in Internet search engines, including those search engines that top the rankings of Internet users' use of online tools that facilitate increasingly precise and efficient searches for specific information on the Internet.

Therefore, if such a system of institutional control on the part of the state is not established, or if this kind of control system involving companies developing such technological solutions on the Internet does not function effectively and/or does not keep up with the technological progress that is taking place, there may be serious negative consequences in the form of an increase in the scale of disinformation realised in the new Internet media.

How important this may be in the future is evident from what is currently happening with the social media portal TikTok. On the one hand, it has been the fastest growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones etc. used for professional purposes by employees of public institutions and/or commercial entities.

It cannot be ruled out that new types of social media will emerge in the future, in which the above-mentioned technological solutions involving the implementation of ChatGPT-type artificial intelligence into online search engines will find application: search engines that may be designed to be operated by Internet users on the basis of intuitive feedback and correlation, on the basis of automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria search controlled by the Internet user for specific, precisely searched information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria search for specific content, publications, persons, companies, institutions, etc. on social media sites and/or on web-based multi-publication indexing sites and web-based knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
If tools such as ChatGPT, after the necessary updates and adaptation to current Internet technologies, are combined with the search engines developed by Internet technology companies, search results may be shaped by certain complex algorithms: by generative artificial intelligence trained to use and improve complex models for advanced, intelligent search on precisely defined topics, and by intelligent search systems based on artificial neural networks and deep learning. If such solutions are created, there is a risk of deliberate shaping of the algorithms of advanced Internet search systems, which may allow Internet search engine technology companies to interfere with and influence search results, and thus shape the general social awareness of citizens on specific topics.
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
  • asked a question related to Algorithms
Question
3 answers
How to create a system of digital, universal tagging of various kinds of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
How to create a system of digital, universal labelling of different types of works, texts, photos, publications, graphics, videos, innovations, patents, etc. performed by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different because they are the product of artificial intelligence?
Two days earlier, in an earlier post, I started a discussion on the necessity of improving the security of the development of artificial intelligence technology and asked the following questions: how should the system of institutional control of the development of advanced artificial intelligence models and algorithms be structured, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee? Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built? Why are the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc., now calling for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology is fully under control and does not get out of hand?

Continuing my reflections on the indispensability of improving the security of the development of artificial intelligence technology, and analysing the potential risks of its dynamic and uncontrolled development, I hereby propose to continue these deliberations and invite you to participate in a discussion aimed at identifying the key determinants of building an institutional control system for the development of artificial intelligence, including the development of advanced models composed of algorithms similar to, or more advanced than, the ChatGPT 4.0 system developed by OpenAI and available on the Internet.

It is necessary to normatively regulate a number of issues related to artificial intelligence: the development of advanced models composed of algorithms that form artificial intelligence systems; the posting of these technological solutions in open access on the Internet; enabling these systems to carry out self-improvement through automated learning of new content, knowledge, information, abilities, etc.; and building an institutional system of control over the development of artificial intelligence technology and the current and future applications of this technology in the various fields of activity of people, companies, enterprises and institutions operating in different sectors of the economy.

Recently, realistic-looking photos of well-known, highly recognisable people, including politicians and presidents of states in unusual situations, which were created by artificial intelligence, have appeared on online social media sites. What has already appeared on the Internet as a kind of 'free creativity' of artificial intelligence, both in the form of 'fictitious facts' in descriptions of events that never happened, created as answers to questions posed to the ChatGPT system, and in the form of photographs of 'fictitious events', already indicates the potentially enormous scale of disinformation developing on the Internet thanks to artificial intelligence systems whose products find their way online.

With the help of artificial intelligence, in addition to texts containing descriptions of 'fictitious facts' and photographs depicting 'fictitious events', it is also possible to create films depicting 'fictitious events' in cinematic terms. All of these creations of 'free creation' by artificial intelligence can be posted on social media and, in the formula of viral marketing, can spread rapidly on the Internet, becoming a source of serious disinformation realised potentially on a large scale. Dangerous opportunities have therefore arisen for the use of this technology to generate disinformation about, for example, a competitor company, enterprise, institution, organisation or individual. Within the framework of building an institutional control system for the development of artificial intelligence technology, it is necessary to create a digital, universal marking system for the various types of works, texts, photos, publications, graphics, films, innovations, patents, etc. performed by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification should be different because they are the product of artificial intelligence. The only issue for discussion is therefore how this should be done.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to create a system for the digital, universal marking of different types of works, texts, photos, publications, graphics, videos, innovations, patents, etc. made by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different because they are the product of artificial intelligence?
How to create a system of digital, universal labelling of different types of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
Some technology companies offering various Internet services have already announced the creation of a system for the digital marking of creations, works, studies, etc. created by artificial intelligence. These companies have probably already noticed that it is possible to create standards for the digital marking of such creations, and that this can become another factor of competition and market advantage.
And what is your opinion about it?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
  • asked a question related to Algorithms
Question
7 answers
Blockchain is a distributed database of immutable records, called blocks, which are secured using cryptography. Each block contains the previous block's hash, transaction details, a nonce, and a target hash value. Financial institutions were the first to take notice of it, as it was, in simple words, a new payment system.
Block is a place in a blockchain where data is stored. In the case of cryptocurrency blockchains, the data stored in a block are transactions. These blocks are chained together by adding the previous block's hash to the next block's header. It keeps the order of the blocks intact and makes the data in the blocks immutable.
A block is like a record of the transaction. Each time a block is verified, it gets recorded in chronological order in the main Blockchain. Once the data is recorded, it cannot be modified.
Relevant answer
Answer
Blocks are data structures within the blockchain database, where transaction data in a cryptocurrency blockchain are permanently recorded. A block records some or all of the most recent transactions not yet validated by the network. Once the data are validated, the block is closed. Then, a new block is created for new transactions to be entered into and validated.
A block is thus a permanent store of records that, once written, cannot be altered or removed...
Blocks and blockchains are not used solely by cryptocurrencies. They also have many other uses...
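The tamper-evidence described above is easy to illustrate: because each block's header carries the previous block's hash, altering any recorded data changes that block's hash and breaks the link. A toy sketch (the block structure and transaction strings are made up, not any real blockchain format):

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    """Toy block: the header stores the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "tx": transactions})
    return {"data": body,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block("0" * 64, ["alice->bob: 5"])
block2 = make_block(genesis["hash"], ["bob->carol: 2"])

# Tampering with the first block changes its hash, so block2's stored
# "prev" no longer matches and the chain is visibly broken.
tampered = make_block("0" * 64, ["alice->bob: 500"])
print(tampered["hash"] == genesis["hash"])  # False: tampering is evident
```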
  • asked a question related to Algorithms
Question
9 answers
Hello everyone, I want to perform a division operation in Verilog HDL. Please suggest an algorithm for division in which the number of clock cycles taken by the division operation is independent of the input: that is, dividing any number a by any number b takes the same number of clock cycles for every pair of a and b.
Relevant answer
Answer
Here is a useful link that I found, with the block diagrams and Verilog codes
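The usual way to meet this requirement in hardware is restoring (or non-restoring) division, which produces one quotient bit per clock cycle, so an N-bit divide always takes exactly N cycles regardless of the operand values. Since the linked Verilog material is not shown here, a Python model of the per-cycle behaviour may help (the 8-bit width is an assumption):

```python
def restoring_divide(a, b, width=8):
    """Model of fixed-latency restoring division: exactly `width` iterations,
    one quotient bit per "clock cycle", independent of the values of a and b."""
    assert b != 0 and 0 <= a < (1 << width)
    remainder, quotient = 0, 0
    for i in range(width - 1, -1, -1):                 # always `width` cycles
        remainder = (remainder << 1) | ((a >> i) & 1)  # shift in next dividend bit
        if remainder >= b:                             # trial subtraction succeeds
            remainder -= b
            quotient |= 1 << i                         # set this quotient bit
    return quotient, remainder

print(restoring_divide(200, 7))  # (28, 4): 200 = 7*28 + 4
```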
  • asked a question related to Algorithms
Question
36 answers
🔴Human-GOD coevolution & the Religion-type of the future (technology; genetics; medicine; robotics; informatics; AI Algorithm & Quantum PC;..)🔴
Gentle RG-readers,
according to (possible) technological and informatics evolution of Homo sapiens from 2000 to 2100,
which Religion-type shows the best resistance-resilience??
What will be the main Religion-type in the future??
--One GOD.
--Multiple and diverse GOD.
--Absence of Transcendence.
Moreover: will GOD be an evolutionary step of Humans??
In this link (chapter, p. 40) there is a quasi-fantasy scenario ...
Other papers of this series
🟥
Relevant answer
Answer
Is this key🔴 area {Posterior Cortical "Hot Zone"} the brain zone where the proteins of #self are stored?? A new paper🔻 shows some intriguing results. See VanGELO Assoluto for an extreme view...
🔻🔻Surge of neurophysiological coupling and connectivity of gamma oscillations in the dying human brain. PNAS, May 1, 2023, vol.120. PMID:37126719 -- DOI:10.1073/pnas.2216268120🔻🔻
Significance.--- Is it possible for the human brain to be activated by the dying process? We addressed this issue by analyzing the electroencephalograms (EEG) of four dying patients before and after the clinical withdrawal of their ventilatory support and found that the resultant global hypoxia markedly stimulated gamma activities in two of the patients. The surge of gamma connectivity was both local, within the temporo–parieto–occipital (TPO) junctions, and global between the TPO zones and the contralateral prefrontal areas. While the mechanisms and physiological significance of these findings remain to be fully explored, these data demonstrate that the dying brain can still be active. They also suggest the need to reevaluate role of the brain during cardiac arrest.
Abstract.--- The brain is assumed to be hypoactive during cardiac arrest. However, animal models of cardiac and respiratory arrest demonstrate a surge of gamma oscillations and functional connectivity. To investigate whether these preclinical findings translate to humans, we analyzed electroencephalogram and electrocardiogram signals in four comatose dying patients before and after the withdrawal of ventilatory support. Two of the four patients exhibited a rapid and marked surge of gamma power, surge of cross-frequency coupling of gamma waves with slower oscillations, and increased interhemispheric functional and directed connectivity in gamma bands. High-frequency oscillations paralleled the activation of beta/gamma cross-frequency coupling within the somatosensory cortices. Importantly, both patients displayed surges of functional and directed connectivity at multiple frequency bands within the posterior cortical “hot zone,” a region postulated to be critical for conscious processing. This gamma activity was stimulated by global hypoxia and surged further as cardiac conditions deteriorated in the dying patients. These data demonstrate that the surge of gamma power and connectivity observed in animal models of cardiac arrest can be observed in select patients during the process of dying.
  • asked a question related to Algorithms
Question
5 answers
How should a system of institutional control over the development of advanced artificial intelligence models and algorithms be built so that this development does not get out of hand and lead to negative consequences that are currently difficult to predict?
Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built?
Why are the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc. now calling for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology is fully under control and does not get out of hand?
To the question of whether the development of artificial intelligence should be under control, the answer is probably obvious: it should. What remains debatable, however, is how the system of institutional control of the development of advanced artificial intelligence models and algorithms should be structured so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee. Besides, if the question 'should the development of artificial intelligence be controlled?' is answered in the affirmative, then who should exercise this control? How should an institutional system of control over the development of advanced artificial intelligence models and algorithms and their applications be constructed, so that the potential and real future negative effects of dynamic and not fully controlled technological progress do not outweigh the positive ones?

At the end of March 2023, a number of new technology developers, artificial intelligence experts, businessmen and investors developing technology start-ups, including Apple co-founder Steve Wozniak; Elon Musk, the founder or co-founder of technology companies such as PayPal, SpaceX, Tesla, Neuralink and the Boring Company; Stability AI chief Emad Mostaque (maker of the Stable Diffusion image generator); and artificial intelligence researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and other universities and labs, called in a joint letter for at least a six-month pause in the development of artificial intelligence systems more capable than the GPT-4 published in March.

The letter, acting as a kind of cautionary petition, was published on the website of the Future of Life Institute: advanced artificial intelligence could represent "a profound change in the history of life on Earth", and the development of this technology should be approached with caution. The petition warns about the unpredictable consequences of the race to create ever more powerful models and complex algorithms, which are key components of artificial intelligence technology. The developers of leading technology companies suggest that the development of artificial intelligence should be slowed down temporarily, as the risk has now emerged that this development could slip out of human control.

The petition warns that an uncontrolled approach to AI development risks a deluge of disinformation, mass automation of work and even the replacement of humans by machines and a 'loss of control over civilisation'. In addition, the letter suggests that if the current rapid development of artificial intelligence algorithm systems gets out of hand, the scale of disinformation on the Internet will increase significantly and the process of work automation already taking place will accelerate many times over, which may lead to the loss of jobs for about 300 million people within the current decade and, as a consequence, to a kind of loss of human control over the development of civilisation. Developers of new technologies point out that advanced artificial intelligence algorithm systems should only be developed once the development of artificial intelligence is under full control, its effects are positive and the potential risks are fully controllable.

Developers of new technologies are calling for a temporary pause in the training of systems more capable than OpenAI's recently released GPT-4 system, which, among other things, is capable of passing various kinds of tests at a level close to the best results achieved by humans. The letter also calls for the implementation of comprehensive government regulation and oversight of new models of advanced AI algorithms, so that the development of this technology does not outpace the creation of the necessary legal regulations.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why do the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc., now call for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology takes place fully under control and does not get out of hand?
Should the development of artificial intelligence be controlled? And if so, who should exercise this control? How should an institutional control system for the development of artificial intelligence applications be built?
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee?
What do you think?
What is your opinion on the subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
The open letter[1] is wishful thinking at best. Let us sample some of the statements provided in the letter:
  • "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
The concept of manageable risk differs from individual to individual, let alone reaching consensus across a population on what a "manageable" risk is.
  • "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
This is the most naive statement of the letter. What can be accomplished in 6 months? We have not even reached consensus on a simple philosophical approach to ethics in the whole history of philosophy, much less on how to approach AI from an ethical standpoint. Stopping AI research for 6 months will not have a significant impact (there are even journals dedicated to AI ethics that have been tackling the issue for several years, and yet there is a pretense of solving this in 6 months!).
  • "implement a set of shared safety protocols ...These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."
The concept of "beyond reasonable doubt" will eventually fall back on what is reasonable practice in the field, and this leads to weak safety protocols.
  • "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
This statement could not be more ambiguous in its definition of terms, and the references of the letter do not help.
The concept of stopping AI research should not be confused with regulating the commercial use of the technology; the letter aims at the wrong issue. Research cannot be stopped for ML, since anyone with a reasonable amount of computing power (which you can buy on Amazon anyway) can continue to work on the subject. But regulating business is easier, since the major players will have the greatest impact in slowing down the misuse of the technology. Also, by regulating business, more targeted restrictions on its use can be applied, rather than vague lists of research topics.
References
  • asked a question related to Algorithms
Question
4 answers
(say for example 200)
LMS algorithm for Adaptive linear prediction.
Relevant answer
Answer
  • asked a question related to Algorithms
Question
10 answers
Dear colleagues,
I was wondering whether there are any methods or software that I can use to analyse the muscle cross-sectional area in my H&E histology images?
I tried using ImageJ thresholding; unfortunately, it does not work efficiently for me. Thus, I was wondering whether there are any currently established methods.
Thank you very much in advance.
Relevant answer
Answer
I also want to calculate muscle fibre CSA in H&E staining using ImageJ. Thresholding cannot be used, but it can be done manually, which is a time-consuming process. I want to know how many fibres should be randomly selected in a sample field and how, how many sample fields are needed, and what the magnification should be.
  • asked a question related to Algorithms
Question
13 answers
I am searching for algorithms for feature extraction from images, which I want to classify using machine learning. I have heard only about SIFT. I have images of buildings and flowers to classify. Other than SIFT, what are some good algorithms?
Relevant answer
Answer
It depends on the features you are trying to extract from the image. Another feature extraction technique you can use is the Histogram of Oriented Gradients (HOG), which counts the occurrences of gradient orientations in localized portions of the image. It has proven to give good recognition accuracy with machine learning algorithms.
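A minimal sketch of HOG extraction with scikit-image; the parameter values are the classic defaults, not values tuned for buildings or flowers:

```python
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())  # placeholder image

# 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks: the classic setup.
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
print(features.shape)  # one fixed-length vector per image, ready for an SVM
```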
  • asked a question related to Algorithms
Question
9 answers
I have seen the authors' implementation of L-BFGS-B in Fortran and ports to several languages. I am trying to implement the algorithm on my own.
I am having difficulty grasping a few steps. Is there a worked-out example using L-BFGS or L-BFGS-B? Something similar to (attached link), explaining the output of each step in an iteration for a simple problem.
  • asked a question related to Algorithms
Question
9 answers
I am analysing data collected by a questionnaire survey, which consists of sociodemographic questions as well as Likert-scale questions related to satisfaction with public transport. I am developing a predictive model to predict public perceptions of using public transport based on sociodemographic characteristics and satisfaction levels.
I could not find any related reference to cite. Therefore, I want to make sure that my study is heading in the right direction.
Relevant answer
Answer
Shahboz Sharifovich Qodirov Thanks for your suggestions.
  • asked a question related to Algorithms
Question
13 answers
Kindly suggest which routing algorithm is better to implement for finding the optimal route in wireless ad hoc networks.
Performance criteria: end-to-end delay, packet delivery ratio, throughput
Relevant answer
Answer
There is no specific answer to your question. To choose the best routing algorithm in an ad hoc network, you must specify the type of application, the size of the network, and the mobility model.
The best-known routing protocols are:
1- AODV and DSR as reactive protocols.
2- OLSR and DSDV as proactive protocols.
3- ZRP and TORA as hybrid protocols.
I recommend reading and citing the related paper
  • asked a question related to Algorithms
Question
3 answers
I want to understand the C5.0 algorithm for data classification. Does anyone have the steps for it, or the original paper in which this algorithm was presented?
Relevant answer
  • asked a question related to Algorithms
Question
14 answers
Hello scientific community
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to solving the same and other problems. I ask now: why keep the original algorithm if it suffers from weaknesses, and why the need for a new algorithm if an existing one already solves the same problems? I understand if the new algorithm solves an unsolved problem, then it is welcome; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?
I think we need to organize the existing metaheuristic algorithms and record the pros and cons of each one, and the problems solved by each one.
The repeated algorithms must disappear, as must the overly complex ones.
The dependent algorithms must also disappear.
We need to benchmark the MHs, similar to a benchmark test suite.
Also, we need to determine the unsolved problems; if you would like to propose a novel algorithm, try to solve an unsolved problem, otherwise please stop.
Thanks, and I await a reputable discussion
Relevant answer
Answer
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
  • asked a question related to Algorithms
Question
7 answers
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computing, biomedicine, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as on some outlier that no one has thought of.
  • asked a question related to Algorithms
Question
20 answers
I am trying to find the best case time complexity for matrix multiplication.
Relevant answer
  • asked a question related to Algorithms
Question
2 answers
It's going to be a huge shift for marketers. Tracking identity is tricky at the best of times, with online/offline and multiple channels of engagement; but when the current methods of targeting, measurement and attribution get disrupted, it is going to be extremely difficult to get identity right to deliver exceptional customer experiences whilst getting compliance right.
We have put out our framework, and initial results show promising measurement techniques, including advanced neo-classical fusion models (borrowed from the financial industry and from biochemical stochastic and deterministic frameworks), and we have applied Bayesian and state-space models to run the optimisations. Initial results are looking very good, and we are happy to share our wider thinking through this work with everyone.
Link to our framework:
Please suggest how you would handle this environmental change, and suggest methods to measure the digital landscape going forward.
#datascience #analytics #machinelearning #artificialintelligence #reinforcementlearning #cookieless #measurementsolutions #digital #digitaltransfromation #algorithms #econometrics #MMM #AI #mediastrategy #marketinganalytics #retargeting #audiencetargeting #cmo
Relevant answer
Answer
Here are a few ideas about how marketers can do this:
• Encourage site login by better authenticated experiences or other consumer-oriented rewards to increase the number of persistent IDs.
• Create a holistic customer view by combining customer and other owned first-party data (e.g., web data) and establishing a persistent cross-channel customer ID.
• Allow customer segmentation, targeting, and measurement across all organizations and platforms. Measurement and audience control can be supported by integrating martech and ad tech pipes wherever possible.
  • asked a question related to Algorithms
Question
7 answers
Apart from Ant Colony Optimization, Can anyone suggest any other Swarm based method for Edge Detection of Imagery?
  • asked a question related to Algorithms
Question
7 answers
Hi ALL,
I want to use a filter to extract only the ground points from an airborne LiDAR point cloud. The point clouds are for urban areas only. Which filter, algorithm, or software is considered best for this purpose? Thanks
Relevant answer
Answer
You may use the freely available ALDPAT software, which implements several ground filtering methods. You may also want to use the CloudCompare software, which incorporates the CSF algorithm, one of the most efficient ground filtering procedures proposed so far.
  • asked a question related to Algorithms
Question
8 answers
This is related to homomorphic encryption. These three algorithms are used in additive and multiplicative homomorphism: RSA and ElGamal are multiplicative, and Paillier is additive. Now I want to know the time complexity of these algorithms.
Relevant answer
Answer
I want the encryption and decryption time complexity of the Paillier cryptosystem.
  • asked a question related to Algorithms
Question
22 answers
Is there really a significant difference between the performance of the different meta-heuristics other than "ϵ"?!!! I mean, at the moment we have many different meta-heuristics, and the set keeps expanding. Every once in a while you hear about a new meta-heuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory, or selection, or call it what you like, to learn from previous steps. You see in MIC, CEC, SIGEVO many repetitions of new meta-heuristics. Does it make sense to be stuck here? Now the same repeats with hyper-heuristics and .....
Relevant answer
Apart from the foregoing discussion, all metaheuristic optimization approaches are alike on average in terms of their performance. The extensive research in this field shows that an algorithm may be the topmost choice for some classes of problems, but at the same time it may be an inferior selection for other types of problems. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm or approach that can be applied to every circumstance; it therefore becomes a challenge to pick the right algorithm that sufficiently suits these essentials.
A discussion of this issue is in section two of the following reference:
  • asked a question related to Algorithms
Question
25 answers
Since the early 90s, metaheuristic algorithms have been continually improved in order to solve a wider class of optimization problems. To do so, different techniques, such as hybridized algorithms, have been introduced in the literature. I would appreciate it if someone could help me find some of the most important techniques used in these algorithms.
- Hybridization
- Orthogonal learning
- Algorithms with dynamic population
Relevant answer
The following current-state-of-the-art paper has the answer for this question:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or simply refer to the same paper at the following address:
  • asked a question related to Algorithms
Question
39 answers
May I ask what the current state of the art is for incorporating FEM with machine learning algorithms? My main questions, to be specific, are:
  1. Is it a sensible thing to do? Is there an actual need for this? 
  2. What would be the main challenges? 
  3. What have people tried in the past? 
Relevant answer
Answer
Very good questions!
Here is my response based on my experience in developing state-of-the-art FE schemes for solid mechanics, fluid mechanics and multiphysics.
1.) Is it a sensible thing to do? Is there an actual need for this?
No. I don't think so.
You are one of the few who have asked this question. ML for FEM, or for that matter for PDEs altogether, is just part of the ongoing craze about ML/AI.
2.) What would be the main challenges?
ML is nothing but curve/surface fitting. Nothing new mathematically. Difficulties are even worse than what we face with the least-squares finite element method because of poorly conditioned matrices. No one talks about this because many don't even know such issues exist. Just GIGO.
The main challenge is getting access to a huge amount of computing power, especially GPUs. Generating the data is not an issue since it is done by running a lot of direct numerical simulations.
3.) What have people tried in the past?
Some tried and are still trying, but it is mostly nonsense if you know FEM. It gets published because it is the "trend".
No one considers the cost incurred in training the models in the discussion.
More or less, one generates thousands of data sets to train a model that will subsequently be used for tens of simulations, wasting about 90% of the resources.
I have not seen anyone demonstrating the applicability of such ML models for changes in geometries (addition/removal of holes, fillets etc.), topologies (solid/solid contact and fracture), mesh (coarsening/refining, different element shapes), and constitutive models. Most probably due to obvious reasons.
  • asked a question related to Algorithms
Question
4 answers
There are PIDs, but usually only the proportional part of the PID algorithm is used.
There are mapping systems, as used in diesel engines.
But building several layers of PIDs is difficult.
Map-based systems (as used, for example, in turbines or diesel engines) need a lot of testing and usually work with new machines in controlled conditions.
It would be better to use an algorithm that adapts and slowly increases or decreases the control signal in order to obtain maximum performance.
Also, the algorithm should give notice of deviations from expected values to warn about problems, providing an efficient diagnosis of the system.
I need this kind of algorithm to control my simulations, to reduce the number of simulations, but also to control my Miranda and Fusion Reactors.
Perhaps some suitable algorithms are: neural networks, multilayer perceptrons (MLP) and radial basis function (RBF) networks, and also the newer support vector regression (SVR).
Relevant answer
Answer
I made an algorithm that theoretically reaches the final solution in the minimum time (a sketch of these steps follows below):
1. Set delta = (max - min)/2 for every parameter.
2. Vary one of the parameters from the centre value by +1/2 delta and -1/2 delta, see which result is the best, then re-centre the parameter on that value.
3. Do the same with all the parameters.
4. Divide delta by 2.
5. Go to 2 until delta reaches the minimum.
The problem is that varying one parameter this much on a REAL machine could break or stop it.
Perhaps a better solution is to start from the centre with delta = the minimum delta, multiply it by 2 each time on the way up, and then begin dividing by 2 again upon entering a second condition (to be defined).
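Here is a minimal Python sketch of the procedure in steps 1-5 above (the toy objective and bounds are placeholders); it is essentially a coordinate search with a halving step size:

```python
def coordinate_search(f, lo, hi, min_delta=1e-4):
    """Maximize f by varying one parameter at a time by +/- delta/2,
    re-centring on the best value, then halving delta (steps 1-5 above)."""
    center = [(l + h) / 2 for l, h in zip(lo, hi)]
    delta = [(h - l) / 2 for l, h in zip(lo, hi)]
    while max(delta) > min_delta:
        for i in range(len(center)):
            candidates = [center[i] - delta[i] / 2, center[i],
                          center[i] + delta[i] / 2]
            # Keep whichever candidate value gives the best objective.
            center[i] = max(candidates,
                            key=lambda v: f(center[:i] + [v] + center[i + 1:]))
        delta = [d / 2 for d in delta]
    return center

f = lambda p: -(p[0] - 1) ** 2 - (p[1] - 2) ** 2  # toy objective, peak at (1, 2)
print(coordinate_search(f, lo=[0, 0], hi=[4, 4]))  # approximately [1.0, 2.0]
```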
  • asked a question related to Algorithms
Question
14 answers
I would be grateful for suggestions to solve the following problem.
The task is to fit a mechanistically motivated nonlinear mathematical model (4-6 parameters, depending on the version of the assumptions used in the model) to a relatively small and noisy data set (35 observations, some likely to be outliers) with a continuous numerical response variable. The model formula contains integrals that cannot be solved analytically, only numerically. My questions are:
1. What optimization algorithms (probably with stochasticity) would be useful in this case to estimate the parameters?
2. Are there reasonable options for the function to be optimized except sum of squared errors?
Relevant answer
Answer
Dear;
You can solve the integrals manually or with software.
Regards
  • asked a question related to Algorithms
Question
6 answers
In the PSO algorithm, when solving a problem in which changing the sequence of elements of the representative solution causes all elements to change, and because the elements depend on each other, the algorithm does not converge to an optimized answer.
How can we solve this problem and make the algorithm converge?
-----
The representative solution consists of two parts.
The first part contains a permutation of integers, coded as continuous values between 0 and 1, which are decoded back to integers in the fitness function.
The second part contains continuous numbers between 0 and 1.
The direction of the second part depends on the values of the first part.
----
There is no problem with duplicate answers in the permutation, because a fixer procedure is applied.
Relevant answer
Answer
Dear;
With the increasing demand for solving larger-dimensional problems, it is necessary to have efficient algorithms, and efforts have been put towards increasing their efficiency. This paper presents a new approach to particle swarm optimization with cooperative coevolution.
Article :
A New Particle Swarm Optimizer with Cooperative Coevolution for Large Scale Optimization
Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014.
Regards
  • asked a question related to Algorithms
Question
4 answers
For example, I have a South Carolina map comprising 5833 grid points, as shown below in the picture. How do I interpolate to get data for unsampled points that are not among the 5833 points but lie within the South Carolina region (red in the picture)? Which interpolation technique is best for a South Carolina region of 5833 grid points?
Relevant answer
Answer
Dear Vishnu,
In which format is the data that you would like to interpolate available: NetCDF, ASCII text, Excel, ...?
  • asked a question related to Algorithms
Question
13 answers
In an online website, some users may create multiple fake users to promote (like/comment) on their own comments/posts. For example, in Instagram to make their comment be seen at the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
Relevant answer
Answer
In my experience, I suggest using a supervised approach for this problem, namely artificial neural networks; you can find different architectures whose background or inspiration comes from human behaviour, for example NN, CNN, RNN, etc.
I hope that is clear for you. @ Issa Annamoradnejad
  • asked a question related to Algorithms
Question
8 answers
I want to learn more about time complexity and the big-O notation of algorithms.
What are some trusted books and resources I can learn from?
Relevant answer
Answer
I highly recommend Introduction to Algorithms: Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms. MIT Press.
  • asked a question related to Algorithms
Question
1 answer
In an AEC configuration, as shown in Fig. 1, let us consider u(n), the far-end speech signal, and x(n), the near-end speech signal. The desired signal is d(n) = v(n) + x(n), where v(n) is the echo signal generated by the echo path impulse response. The purpose of an adaptive filter W is to find an echo estimate y(n), which is then subtracted from the desired signal d(n) to obtain e(n).
When I implement the APA adaptive algorithm for echo cancellation, I observe a leakage phenomenon, which is explained as follows:
y(n) will contain a component proportional to x(n) that is subtracted from the total desired signal. This phenomenon is in fact a leakage of x(n) into y(n) through the error signal e(n); the result is an undesired attenuation of the near-end signal.
Because of this near-end leakage phenomenon, there is near-end signal suppression in the post-processing output.
I am handling the double-talk using robust algorithms, without the need for a DTD.
I kindly request suggestions on how to avoid near-end signal leakage in the adaptive filter output y(n).
Relevant answer
Answer
To avoid this problem, there are two methods: use a DTD, or modify your algorithm's update by incorporating statistics of the near-end signal to control the adaptation, as done by many authors (e.g., variable step-size algorithms).
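As a rough illustration of the second method (not the exact algorithm from this thread), here is an NLMS update with a variable step size; the control rule that shrinks the step when the residual grows is only a toy simplification:

import numpy as np

def vss_nlms(u, d, L=128, mu_max=0.5, eps=1e-6):
    # u: far-end signal, d: desired signal (echo + near-end); returns error e.
    w = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L, len(d)):
        x = u[n-L:n][::-1]          # regressor, most recent sample first
        e[n] = d[n] - w @ x         # error after subtracting echo estimate
        p_x = x @ x + eps
        # Toy rule: a large residual relative to far-end power suggests
        # near-end activity, so the step size is reduced.
        mu = mu_max * p_x / (p_x + L * e[n]**2)
        w += mu * e[n] * x / p_x
    return e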
  • asked a question related to Algorithms
Question
6 answers
Can anyone suggest a data compression algorithm to compress and regenerate data from sensors (e.g., an accelerometer, which produces time and acceleration) used to obtain structural vibration response? I have tried using PCA, but I am unable to regenerate my data. Kindly suggest a suitable method or some other algorithm to go with PCA.
Relevant answer
Answer
Saranya Bharathi, I am wondering, which algorithm did you use?
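On regenerating the data: if scikit-learn was used, reconstruction from the retained components is done with inverse_transform; a minimal sketch (window length and component count are assumptions):

import numpy as np
from sklearn.decomposition import PCA

signal = np.random.randn(10000)              # placeholder accelerometer trace
frames = signal[:9984].reshape(-1, 128)      # frame the signal into windows

pca = PCA(n_components=16)                   # keep 16 of 128 coefficients
compressed = pca.fit_transform(frames)       # compression step
restored = pca.inverse_transform(compressed) # lossy reconstruction
rmse = np.sqrt(np.mean((frames - restored) ** 2))

For vibration data, wavelet-based compression is also worth considering, since it preserves transients better than a global PCA basis.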
  • asked a question related to Algorithms
Question
7 answers
What would be a suitable model for solving the regression problem? Is there any hybrid algorithm or new model/framework to solve this problem in deep learning? How promising is deep learning for regression tasks?
Relevant answer
Answer
It is very similar to the use of deep learning for the classification problem. You just use different layers at the end of the network; e.g., in a CNN, instead of a softmax layer and cross-entropy loss, you can use a regression layer and MSE loss.
It will be as useful as deep classification networks. But it depends on your data and problem. RNNs (especially LSTMs) are useful for time-series and sequential data such as speech, music, and other audio signals, EEG and ECG signals, stock market data, weather forecasting data, etc.
If you are using MATLAB, here are two examples (CNN and LSTM) for the implementation:
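For readers not using MATLAB, here is a rough PyTorch-style sketch of the same idea (a linear output layer trained with MSE loss); all layer sizes are illustrative assumptions:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    nn.Linear(16 * 8, 1),      # single continuous output instead of softmax
)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 1, 100)   # batch of 32 one-channel sequences
y = torch.randn(32, 1)        # continuous targets
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()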
  • asked a question related to Algorithms
Question
5 answers
I need to implement the epsilon-constraint method to solve multi-objective optimization problems, but I don't know how to choose each epsilon interval, nor when to terminate the algorithm, that is, the stopping criterion.
Relevant answer
Answer
Hi my dear friend
Good day!
I think the book below will help you a lot by providing relevant code.
Messac, A. (2015). Optimization in practice with MATLAB®: for engineering students and professionals. Cambridge University Press.
best regards.
Saeed Rezaeian-Marjani
  • asked a question related to Algorithms
Question
3 answers
Based on my results, it seems that MATLAB's quadprog function (the conventional interior-point algorithm) is not fully exploiting the sparsity and structure of the sparse QP formulation.
In Model Predictive Control, the computational complexity should scale linearly with the prediction horizon N. However, results show that the complexity scales quadratically with the prediction horizon N.
What can be possible explanations?
Relevant answer
Answer
Thanks for your answer.
I gave the wrong information: quadprog is not exploiting the sparsity and structure of the sparse QP formulation at all, so the computational complexity scales cubically with N.
The barrier algorithm of Gurobi was not fully exploiting the sparsity and structure of the sparse QP formulation either, so its computational complexity also scales cubically with N. Do you have some documentation about this? I could not find anything relevant on the Gurobi website.
Could you maybe also react to my last topic about why the interior-point algorithms of Gurobi and quadprog give the same optimized values x and corresponding objective function value, while the first-order solver GPAD gives the same optimized values x but an objective function value that is a factor of 10 larger?
Your answers are always really helpful.
Regards
  • asked a question related to Algorithms
Question
13 answers
When doing machine learning, do we normally use several algorithms for comparison? For example, if the RMSE of an SVM is 0.1, how do I conclude that this model performed well? Just by saying the RMSE value is low, so the result is good? But without a comparison, how do I say it is low?
Or shall I include other algorithms, e.g., random forest, to compare the values? I intended to use only SVM regression, but now I am a bit stuck on the interpretation of the results. Thank you in advance!
Relevant answer
Answer
Sorry to differ: even if one's contribution is a unique method, it is still better to compare different regression models to complete the picture (recommending the best regression model to accompany the method cannot be done without comparison).
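A hedged sketch of such a comparison with scikit-learn cross-validation; a trivial baseline (predicting the mean) is included so that "low RMSE" has a reference point (data is synthetic):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

models = [("Mean baseline", DummyRegressor()),
          ("SVR", SVR()),
          ("Random forest", RandomForestRegressor(random_state=0))]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.3f} +/- {scores.std():.3f}")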
  • asked a question related to Algorithms
Question
5 answers
Dear all,
I have a watershed and wish to randomly split it into different 'artificial' farms, where the farm areas follow an exponential distribution, as found empirically in other studies. The 'artificial' farms could be rectangular or any other shape.
Is there any way to do this in GIS or other software? Any method achieved through shapefile or raster can be accepted.
Thank you!
Pan
Relevant answer
Answer
An interesting sampling problem. I guess the bottom line is brute-force Monte Carlo simulation with Delaunay triangulation. Adding some regularization to turn it into an optimization problem may work better. Sorry for not being very helpful.
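For what it's worth, the area-drawing step on its own is straightforward; a sketch (the mean farm size is an assumption, and the spatial allocation, e.g. region growing from random seed cells, would follow separately):

import numpy as np
rng = np.random.default_rng(0)

watershed_area = 2.5e7          # m^2, placeholder for the actual watershed
mean_farm_area = 5.0e4          # m^2, assumed exponential mean

areas = []
while sum(areas) < watershed_area:
    areas.append(rng.exponential(mean_farm_area))
areas[-1] -= sum(areas) - watershed_area   # trim the last farm to fit exactly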
  • asked a question related to Algorithms
Question
39 answers
Hello, everyone. I am a student of electrical engineering, and my research field is the optimization of power systems.
I know that the algorithm we choose depends on our problem, but there are lots of heuristic and metaheuristic algorithms to choose from. It also takes time to understand a specific algorithm, and only afterwards may we learn that it was not the best fit for the problem. So, given my problem, how can I choose the best algorithm?
Is there a simple approach that can save time as well?
Thank you for your precious time.
Relevant answer
Answer
As most people have indicated, the best solution depends on the 'surface' you are optimising and the number of dimensions.
If you have a large number of dimensions and a smooth surface, then traditional methods that use derivatives (or approximations to derivatives), such as the quasi-Newton method, work well. If there are a small number of dimensions and the surface is fairly sensible but noisy, then the Nelder-Mead simplex works well. For higher dimensions with noise, but a still fairly sensible (hill-like) surface, simulated annealing works. Surfaces which are discontinuous and misleading are best addressed with the more modern heuristic techniques such as evolutionary algorithms. If you are trying to find a Pareto surface, then use a multi-objective genetic algorithm.
So the key things are: how many dimensions; is the surface reasonably smooth (reliable derivatives); and do you want a Pareto surface, or can you run multiple single-criterion optimisations? The other question is whether you need to know the optimum or just want a very good result. There are often good algorithms for approximations to the best result, for example using a simplified objective function, which can be evaluated much faster, to get a good rough solution that may be the starting point for a high-fidelity solution.
Sorry if this indicates it is complex; it really does depend on the solution space. Do not forget traditional mathematical methods used in Operational Research as well. Good luck!
  • asked a question related to Algorithms
Question
4 answers
I want to execute the Apriori algorithm for association rule mining in MATLAB.
Relevant answer
Answer
I work on the same topic and I use
It works with small datasets, but with large databases like mushroom, BMS1, or other datasets it takes time.
Who has a solution for that? @Shafagat Mahmudova
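Outside MATLAB, for reference, the Apriori step is a one-liner in Python with the mlxtend package; a minimal sketch on a toy transaction set:

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

transactions = [["bread", "milk"], ["bread", "butter"], ["milk", "butter", "bread"]]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent_itemsets = apriori(df, min_support=0.5, use_colnames=True)

On large datasets such as mushroom, the FP-Growth implementation in the same package (mlxtend.frequent_patterns.fpgrowth) is usually much faster than Apriori.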
  • asked a question related to Algorithms
Question
2 answers
I'm trying to implement a fall detection algorithm written in C on a Zybo board.
I am using Vivado HLS.
I don't know how to start, even though I have already done the tutorials related to Zynq-7000.
Thank you for any help.
Relevant answer
Answer
V. Carletti, A. Greco, A. Saggese, and M. Vento, "A smartphone-based system for detecting falls using anomaly detection," in ICIAP 2017 Proceedings, 2017. [Online] Available: https://link.springer.com/chapter/10.1007/978-3-319-68548-9_45
V. Carletti, A. Greco, V. Vigilante, and M. Vento, "A wearable embedded system for detecting accidents while running," in VISAPP 2018 - Proceedings of the International Conference on Computer Vision Theory and Applications, 2018. [Online] Available: https://www.scitepress.org/Papers/2018/66128/66128.pdf
  • asked a question related to Algorithms
Question
2 answers
I am currently working on a binary classification of EEG recordings and I came across with CSP.
As far as I understand, CSP allows you to choose the best features by maximizing the variance between two classes, which is perfect for what I'm doing. Here follow the details and questions:
- I have N trials per subject, from which half belongs to class A and the other half to class B.
- Let's say I want to apply CSP to this subject's trials. From what I understood, I should apply CSP to all my trials (please correct me if I'm wrong here). Do I arbitrarily choose which trial from class A to compare with one from class B? Is the order in which I do it indifferent?
- After CSP, I should get the projection matrix (commonly written as W), from which I can obtain the transformed signal and compute the variances (part of which will be my features). Why is the computed variance transformed with a log function in most papers?
Thank you very much
Relevant answer
Answer
The projection matrix W is essentially the eigendecomposition of the covariance of the data matrix X. Alternatively, this can be done using singular value decomposition (SVD), which is a more efficient way of handling high-dimensional data, as it avoids calculation of the large matrix X^T X. Apply SVD to the entire dataset and then plot the cumulative sum of the singular values against the number of singular values. This can help in selecting the proper number of singular values, accounting for, say, 95% of the variance in the data. A log scale specifies relative changes, while a linear scale specifies absolute changes.
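A hedged sketch of this standard CSP pipeline (covariances are averaged over all trials of each class, so no pairing of individual trials is needed; array shapes are assumptions):

import numpy as np
from scipy.linalg import eigh

def csp_fit(trials_a, trials_b, n_components=6):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples)
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, W = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)       # extreme eigenvalues discriminate best
    k = n_components // 2
    idx = np.concatenate([order[:k], order[-k:]])
    return W[:, idx].T                # spatial filters, (n_components, n_channels)

def csp_features(W, trial):
    z = W @ trial                     # filtered trial
    var = z.var(axis=1)
    return np.log(var / var.sum())    # log-variance features

The log both makes the feature distribution closer to Gaussian (convenient for linear classifiers) and turns multiplicative variance changes into additive ones, which is why most papers apply it.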
  • asked a question related to Algorithms
Question
4 answers
For lightweight encryption.
Relevant answer
Answer
Respected Sir,
Please send me the source code of your project.
  • asked a question related to Algorithms
Question
3 answers
I am working on optimizing well placement in the condensate reservoir model using an algorithm. Any kind of code example will be appreciated.
Relevant answer
  • asked a question related to Algorithms
Question
3 answers
Hello,
I am working with a convex hull in n dimensions (n > 3) and I am having problems generating points on the convex hull surface. Ideally, I would like the points to be uniformly or almost uniformly distributed. I am mostly looking for something simple to understand and implement.
(I am using scipy.spatial.ConvexHull python library)
Any help would be greatly appreciated :)
edit: Thank you very much for the answers already given.:) I have reformulated the question hoping to remove any confusion.
Thanks,
Noemie
Relevant answer
Answer
Dear Noemie,
I don't think the question can really be answered without posing it more precisely. First and foremost would be your expectations about the meaning of a uniform distribution of points on a convex hull...
However, if you are interested in convex hulls that have been generated from some process (and aren't easily relatable functions), then my thought would be to adapt one of the convex hull finding algorithms and take a series of random walks along the boundaries from some set of known initial points. These could be made to approximate whichever expected distances and variances you decide you are looking for.
An alternative approach would be to "walk" the boundary according to an n-space grid, again using one of the boundary-finding algorithms, and randomly select grid points. These could then be perturbed to correspond with your planned distance metric and distribution.
Good luck
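If "uniform" is taken to mean uniform with respect to the surface measure on the facets, one simple recipe (a sketch under that assumption) is to pick a facet simplex with probability proportional to its (n-1)-volume and then draw uniform barycentric coordinates on it:

import numpy as np
from scipy.spatial import ConvexHull

def sample_hull_surface(points, n_samples, rng=None):
    rng = rng or np.random.default_rng()
    hull = ConvexHull(points)
    simplices = points[hull.simplices]      # (n_facets, dim, dim) facet vertices
    edges = simplices[:, 1:, :] - simplices[:, :1, :]
    grams = np.einsum('fik,fjk->fij', edges, edges)
    vols = np.sqrt(np.abs(np.linalg.det(grams)))   # facet volume up to a constant
    choice = rng.choice(len(simplices), size=n_samples, p=vols / vols.sum())
    # Uniform barycentric coordinates on a simplex come from a flat Dirichlet
    bary = rng.dirichlet(np.ones(simplices.shape[1]), size=n_samples)
    return np.einsum('si,sik->sk', bary, simplices[choice])

The factorial constant in the simplex-volume formula cancels in the normalization, so it is omitted.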
  • asked a question related to Algorithms
Question
6 answers
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, which mostly consists of assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code that I used is the following, where ITERS = 1.
public static List encrypt(String policy, int secLevel, String type, byte[] data, int ITERS) {
    double results[] = new double[ITERS];
    DETABECipher cipher = new DETABECipher();
    long startTime, endTime;
    List list = null;
    for (int i = 0; i < ITERS; i++) {
        startTime = System.nanoTime();
        list = cipher.encrypt(data, secLevel, type, policy);
        endTime = System.nanoTime();
        results[i] = (double)(endTime - startTime) / 1000000000.0;
    }
    return list;
}

public List encrypt(byte abyte0[], int i, String s, String s1) {
    AccessTree accesstree = new AccessTree(s1);
    if (!accesstree.isValid()) {
        System.exit(0);
    }
    PublicKey publickey = new PublicKey(i, s);
    if (publickey == null) {
        System.exit(0);
    }
    AESCipher.genSymmetricKey(i);
    timing[0] = AESCipher.timing[0];
    if (AESCipher.key == null) {
        System.exit(0);
    }
    byte abyte1[] = AESCipher.encrypt(abyte0);
    ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
    timing[1] = AESCipher.timing[1];
    timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
    long l = System.nanoTime();
    LinkedList linkedlist = new LinkedList();
    linkedlist.add(abyte1);
    linkedlist.add(AESCipher.iv);
    linkedlist.add(abeciphertext.toBytes());
    linkedlist.add(new Integer(i));
    linkedlist.add(s);
    long l1 = System.nanoTime();
    timing[3] = (double)(l1 - l) / 1000000000D;
    return linkedlist;
}

public static byte[] encrypt(byte[] paramArrayOfByte) {
    if (key == null) {
        return null;
    }
    byte[] arrayOfByte = null;
    try {
        long l1 = System.nanoTime();
        cipher.init(1, skey);
        arrayOfByte = cipher.doFinal(paramArrayOfByte);
        long l2 = System.nanoTime();
        timing[1] = ((l2 - l1) / 1.0E9D);
        iv = cipher.getIV();
    } catch (Exception localException) {
        System.out.println("AES MODULE: EXCEPTION");
        localException.printStackTrace();
        System.out.println("---------------------------");
    }
    return arrayOfByte;
}

public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[] paramArrayOfByte, AccessTree paramAccessTree) {
    Pairing localPairing = paramPublicKey.e;
    Element localElement1 = localPairing.getGT().newElement();
    long l1 = System.nanoTime();
    localElement1.setFromBytes(paramArrayOfByte);
    long l2 = System.nanoTime();
    timing[3] = ((l2 - l1) / 1.0E9D);
    l1 = System.nanoTime();
    Element localElement2 = localPairing.getZr().newElement().setToRandom();
    Element localElement3 = localPairing.getGT().newElement();
    localElement3 = paramPublicKey.g_hat_alpha.duplicate();
    localElement3.powZn(localElement2);
    localElement3.mul(localElement1);
    Element localElement4 = localPairing.getG1().newElement();
    localElement4 = paramPublicKey.h.duplicate();
    localElement4.powZn(localElement2);
    l2 = System.nanoTime();
    timing[4] = ((l2 - l1) / 1.0E9D);
    ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
    ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
    localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
    timing[5] = ShamirDistributionThreaded.timing;
    return localABECiphertext;
}

public ABECiphertext(Element element, Element element1, AccessTree accesstree) {
    c = element;
    cp = element1;
    cipherStructure = new HashMap();
    tree = accesstree;
}

public void execute(AccessTree accesstree, Element element, ABECiphertext abeciphertext, PublicKey publickey) {
    pairing = publickey.e;
    ct = abeciphertext;
    PK = publickey;
    countDownLatch = new CountDownLatch(accesstree.numAtributes);
    timing = 0.0D;
    double d = System.nanoTime();
    Thread thread = new Thread(new Distribute(abeciphertext, accesstree.root, element));
    thread.start();
    try {
        countDownLatch.await();
        long l = System.nanoTime();
        timing = ((double)l - d) / 1000000000D;
        synchronized(mutex) { }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
Relevant answer
Answer
That's a hardware issue and nothing else. Best, T.T.
  • asked a question related to Algorithms
Question
16 answers
In the ε-constraint method, one objective will be used as the objective function, and the remaining objectives will be used as constraints using the epsilon value as the bound. In this case:
- Do we need to apply a penalty method to handle the constraint?
- How to select the best solution?
- How to get the final Pareto set?
Relevant answer
Answer
You can perform a web search on "epsilon-constraint method multi-objective", which yields many hits on Google Scholar; see what others have done in the recent past. It is quite a popular tool.
You may look at the useful links below.
I hope this helps!
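Not from the thread, but a minimal sketch of the mechanics with SciPy on two toy objectives: the solver enforces the epsilon constraint directly (no penalty needed), each feasible epsilon contributes one Pareto point, and the swept grid doubles as the stopping rule:

import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2      # toy objectives, assumptions
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2

x0 = np.zeros(2)
f2_lo = minimize(f2, x0).fun                  # best attainable f2
f2_hi = f2(minimize(f1, x0).x)                # f2 at the f1 optimum

pareto = []
for eps in np.linspace(f2_lo, f2_hi, 21):
    cons = [{'type': 'ineq', 'fun': lambda x: eps - f2(x)}]  # f2(x) <= eps
    res = minimize(f1, x0, method='SLSQP', constraints=cons)
    if res.success:
        pareto.append((res.fun, f2(res.x)))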
  • asked a question related to Algorithms
Question
5 answers
Dear colleagues,
If you consider a "complete dense" multivariate polynomial, does there exist a Horner factorization scheme like the one for a classical univariate polynomial?
(By "complete dense", I mean all possible monomials up to a given total order; the number of monomials is given by the known combination formula C(n+r, r), if I am correct.)
Thanks for your answers.
Relevant answer
Answer
I hope you find the following article useful.
Best regards
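For experimentation, SymPy's horner() also handles multivariate polynomials by nesting one variable at a time; a small sketch:

from sympy import symbols, horner, expand

x, y = symbols('x y')
p = expand((1 + x + y)**3)   # complete dense polynomial of total order 3
print(horner(p))             # nested Horner form, recursing variable by variable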
  • asked a question related to Algorithms
Question
2 answers
Intuitively, maximum likelihood inference on high-frequency data should be slow because of the large dataset size. I was wondering if anyone has experience with slow inference; I could then develop optimization algorithms to speed it up.
I tried this with Yacine Ait-Sahalia's work on estimating diffusion models, using his code, which (unfortunately!) is pretty fast, even for large datasets. If anyone knows of a large, slow, high-frequency financial econometric problem, do let me know.
Relevant answer
Answer
For large samples, exact maximum likelihood can be approximated reasonably well by faster estimation methods. But I do not understand why you want slow methods. As far as I know, Ait-Sahalia's code is good. Why do you say "(Unfortunately!)"?
  • asked a question related to Algorithms
Question
4 answers
I want to study RPL in mobile WSNs. I am using the NS-2.35 and WSNet simulators, and Cooja.
Can I find some source code for algorithms that improve RPL?
Relevant answer
Answer
RPL is an IPv6-based routing protocol for Low-Power and Lossy Networks (LLNs), a class of network in which both the routers and their interconnects are constrained devices (i.e., limited in processing power, memory, and energy consumption). RPL routing is based on Destination-Oriented Directed Acyclic Graphs (DODAGs). Step 1: Open Contiki OS in VMware Workstation and log in as user "user" with password "user". Step 2: Open the terminal on the Contiki desktop and change to the right directory to run the Cooja simulator tools. In the terminal:
Go to the directory: cd "/home/user/contiki/tools/cooja" ----> press Enter
Give the command: ant run ------> press Enter
After successful execution of the above commands, the makefile builds automatically and the Contiki Cooja network simulator application appears. It is a blue-colored window.
  • asked a question related to Algorithms
Question
6 answers
We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles on core algorithmic research.
Relevant answer
Answer
There are several journals for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Closely related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Information Retrieval
Knowledge and Information Systems
Information Processing Letters
ACM Computing Surveys
Information Processing and Management
best regards,
rapa
  • asked a question related to Algorithms
Question
2 answers
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X, ideally one that is also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
Relevant answer
Answer
I've found a description with algorithm implemented in R for you. I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description with implementation in python you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
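Along the same lines as those posts, the test reduces to a feasibility linear program; a hedged sketch with SciPy (the margin of 1 is without loss of generality on a finite point set, and enumerating all 2^n points is only viable for small n):

import numpy as np
from itertools import product
from scipy.optimize import linprog

def linearly_separable(X, n):
    # X: a set of 0/1 tuples; tests separability of X from its complement.
    pts = np.array(list(product([0, 1], repeat=n)), dtype=float)
    labels = np.array([1.0 if tuple(map(int, p)) in X else -1.0 for p in pts])
    aug = np.hstack([pts, np.ones((len(pts), 1))])   # append bias column
    # Feasibility LP: find (w, b) with label * (w . x + b) >= 1 for all points
    res = linprog(c=np.zeros(n + 1), A_ub=-labels[:, None] * aug,
                  b_ub=-np.ones(len(pts)), bounds=[(None, None)] * (n + 1))
    return res.status == 0   # status 0 = feasible (separable), 2 = infeasible

print(linearly_separable({(0, 0), (0, 1)}, 2))   # True: separated by x1 < 0.5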
  • asked a question related to Algorithms
Question
17 answers
What are the links between their definitions? How are they interconnected? What are their similarities and differences? ...
I would be grateful if you could reply by referring to valid scientific literature sources.
Relevant answer
Answer
All are approaches that exploit the computational intelligence paradigm. Machine learning refers to data analytics; evolutionary computation deals with optimization problems.
  • asked a question related to Algorithms
Question
2 answers
Is there any polynomial (reasonably efficient) reduction which makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties which are easy to prove for the binary case, and a reduction as asked above can help generalizing those properties to any arbitrary alphabet.
Relevant answer
Answer
The DP algorithm for LCS finds the solution in O(n*m) time, which is polynomial. The reduction you are looking for is the problem itself: if you can solve the problem for an alphabet of arbitrary size, you have in particular solved it for size 2.
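For reference, the alphabet-agnostic DP the answer refers to, as a short sketch:

def lcs_length(a, b):
    # Classic O(len(a) * len(b)) dynamic program; works over any alphabet.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i-1] == b[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. "BCBA"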
  • asked a question related to Algorithms
Question
3 answers
Dijkstra's algorithm performs the best sequentially on a single CPU core. Bellman-Ford implementations and variants running on the GPU outperform this sequential Dijkstra case, as well as parallel Delta-Stepping implementations on multicores, by several orders of magnitude for most graphs. However, there exist graphs (such as road-networks) that perform well only when Dijkstra's algorithm is used. Therefore, which implementation and algorithm should be used for generic cases?
Relevant answer
Answer
Maleki et al. achieved improvements over delta-stepping:
Saeed Maleki, Donald Nguyen, Andrew Lenharth, María Garzarán, David Padua, and Keshav Pingali. 2016. DSMR: A Parallel Algorithm for Single-Source Shortest Path Problem. In Proc.\ 2016 International Conference on Supercomputing (ICS '16). ACM, New York, NY, USA, Article 32, DOI: https://doi.org/10.1145/2925426.2926287.
At the end of the Abstract they write:
"Our results show that DSMR is faster than the best previous algorithm, parallel [Delta]-Stepping, by up-to 7.38x".
Page 9, col 1, line -3:
"Machines: Two experimental machines were used for the evaluation: a shared-memory machine with 40 cores (4 10-core Intel(R) Xeon^TM E7-4860) and 128GB of memory; the distributed[-]memory machine Mira, a supercomputer at Argonne National Lab. Mira has 49152 nodes and each node has 16 cores (PowerPC A2) with 16GB of memory."
Best wishes,
Frank
  • asked a question related to Algorithms
Question
33 answers
Hi, I have a little prior experience with genetic algorithms.
Currently I am trying to use a GA for scheduling, where I have some events and rooms that must be scheduled; each event has different time requirements, and there are constraints on room availability.
But I want to know whether there are alternatives to GA, since GA is a somewhat random and slow process. Are there other techniques that can replace GA?
Thanks in advance.
Relevant answer
Answer
There are tons of algorithms. Here is a list:
DE
PSO
ABC
CMA-ES, etc
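For timetabling specifically, a local-search alternative such as simulated annealing is often simpler and faster than a GA. A generic hedged skeleton (the three callbacks are problem-specific placeholders you would supply):

import math, random

def simulated_annealing(init, neighbor, cost, t0=1.0, alpha=0.995, iters=20000):
    # init() -> initial timetable; neighbor(x) -> perturbed copy (e.g. move one
    # event to another room/slot); cost(x) -> number of constraint violations.
    x = init()
    cx = cost(x)
    best, cbest = x, cx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        cy = cost(y)
        if cy < cx or random.random() < math.exp((cx - cy) / t):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        t *= alpha                      # geometric cooling schedule
    return best, cbest

For timetabling, constraint-programming solvers (e.g. Google OR-Tools CP-SAT) are also a strong, less random alternative.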