
Reducing the Cognitive Burden of a Soldier with the Help of Personal AI and LLM Assistant

Authors:
  • Central Scientific Research Institute of Armaments and Military Equipment of the Armed Forces of Ukraine

Abstract

In this presentation, the idea of creating a personal assistant for a soldier is proposed. The technical aspects of implementing such an idea on the basis of a mixture-of-experts (MoE) LLM/SLM are discussed, together with a description of typical use cases and the functionality of a soldier's personal assistant.
The Human System Integration (HSI) virtual symposium
12 January 2024
1
VADYM SLYUSAR
Doctor of Sciences, Professor
12 January 2024
Reducing the Cognitive Burden
of a Soldier with the Help of
Personal AI and LLM Assistant
Key threats that increase the cognitive burden of soldiers
2
Increasing the cognitive burden of soldiers on the battlefield can be attributed to various threats
and factors, each contributing to diminished decision-making capacity, reduced situational
awareness, and increased stress. Here are some key threats that affect soldiers' cognitive burden:
1. Information Overload. Modern warfare involves a massive influx of data from various sources like
drones, sensors, and communication from command. Processing this vast amount of information in
real-time can overwhelm soldiers, leading to decision fatigue and slower reaction times.
2. Environmental Stressors. Harsh environmental conditions such as extreme temperatures, high
altitudes, or loud noises can impair cognitive functions. Soldiers might have to contend with these
while also carrying out complex tasks, leading to increased mental fatigue.
3. Physical Exhaustion. Long periods of vigilance, carrying heavy equipment, and physical exertion
without adequate rest can lead to exhaustion. Tired soldiers are more likely to make errors, react
more slowly, and show reduced problem-solving ability.
4. Psychological Stress. The threat of injury or death, witnessing trauma, and the moral dilemmas
inherent in combat can lead to immense psychological stress. This can manifest as anxiety,
depression, or post-traumatic stress disorder (PTSD), all of which add to the cognitive load.
GPT-4
3
5. Technological Complexity. The increasing complexity of military technology requires soldiers to
have higher levels of training and cognitive engagement. Learning and operating advanced equipment
under stress can be cognitively demanding.
6. Multitasking Requirements. Modern combat often requires soldiers to perform multiple tasks
simultaneously, such as navigating, communicating, and engaging the enemy. Juggling these tasks
can lead to cognitive overload and increased error rates.
7. Sleep Deprivation. Operations that require extended vigilance without proper rest can lead to sleep
deprivation. This significantly impairs cognitive functions, including attention, memory, and decision-making.
8. Chemical Exposures. Exposure to certain chemicals or substances, either intentionally through
chemical warfare or unintentionally through environmental exposure, can affect cognitive functioning
and overall brain health.
Mitigating these threats involves a combination of training, technology, and support systems designed
to reduce stress, provide timely and clear information, and ensure soldiers are physically and
psychologically prepared for the demands of combat.
Threats that increase the cognitive burden of soldiers (cont.)
GPT-4
GPT-4
4
A Large Language Model (LLM) is a machine learning model capable of
generating and processing text in natural human languages. These models are
based on neural networks with a transformer or informer-transformer architecture
and are trained on large volumes of textual data. As a result, they can answer
questions, create texts, perform translations, and carry out other language
processing or multimodal generative AI tasks.
LLMs Evolution
https://arxiv.org/pdf/2308.14149.pdf
5
https://www.appypie.com/blog/evolution-of-language-models
Informer-transformer architectures
6
https://arxiv.org/abs/2305.10203
The Start Page of GPT-4
https://chat.openai.com/auth/login
7
GPT-4V with GPTs
8
https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf
9
Alternatives to GPT-4
Local LLMs
10
11
"Zoo" of local GPT-4 alternatives
SynthIA-7B-v1.3
LlongOrca-7B-16K
Tiny Llama
The LangChain library as a foundation for tuning LLMs
https://github.com/langchain-ai
LangChain, created by Harrison Chase (https://github.com/hwchase17), was released in October 2022.
It offers a comprehensive suite of resources for developing applications powered by Large Language Models (LLMs). LangChain facilitates the integration of LLMs with external data sources, such as a user's personal documents or internet resources, enabling more dynamic and informed interactions (a minimal retrieval sketch follows below).
12
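As an illustration of how LangChain can ground a local model in a soldier's own documents, here is a minimal retrieval-augmented Q&A sketch. It assumes the pre-0.1 LangChain import paths that were current at the time of this presentation, a local GGUF model served through llama-cpp-python, and illustrative file and model names; none of these come from the original slides.

```python
# Minimal sketch: retrieval-augmented Q&A over local documents with LangChain
# and a local quantized model (file names and the question are illustrative).
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

# 1. Load and split local documents (e.g., field manuals, SOPs) into chunks.
docs = TextLoader("field_manual.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Build an offline vector index with a small embedding model.
index = FAISS.from_documents(
    chunks, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
)

# 3. Wrap a quantized local LLM; no network access is required.
llm = LlamaCpp(model_path="stablelm-zephyr-3b.Q4_K_M.gguf", n_ctx=2048, temperature=0.2)

# 4. Answer questions grounded in the retrieved document chunks.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=index.as_retriever(search_kwargs={"k": 3}))
print(qa.run("What is the immediate action drill for a weapon stoppage?"))
```

The same pattern extends to internet resources or unit databases by swapping the document loader.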
Comparing various local LLMs
13
https://mistral.ai/news/announcing-mistral-7b/
Small Language Models
(SLMs)
14
Technical aspects of integrating Gemini Nano into a
smartphone
15
https://markovate.com/blog/gemini-nano/
Gemini Nano use cases
16
https://markovate.com/blog/gemini-nano/
Benefits of using Gemini Nano
17
https://markovate.com/blog/gemini-nano/
Walter-StableLM-3B
18
Name                             Quant method   Size
walter-stablelm-3b.fp16.gguf     fp16           5.59 GB
walter-stablelm-3b.q2_k.gguf     q2_k           1.20 GB
walter-stablelm-3b.q3_k_m.gguf   q3_k_m         1.39 GB
walter-stablelm-3b.q4_k_m.gguf   q4_k_m         1.71 GB
walter-stablelm-3b.q5_k_m.gguf   q5_k_m         1.99 GB
walter-stablelm-3b.q6_k.gguf     q6_k           2.30 GB
walter-stablelm-3b.q8_0.gguf     q8_0           2.97 GB
https://huggingface.co/afrideva/Walter-StableLM-3B-GGUF
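For example, one of the quantized files listed above can be run fully offline on an edge device with llama-cpp-python. The sketch below is illustrative: the chosen quantization level, thread count, and the instruction-style prompt format are assumptions to be checked against the model card.

```python
# Minimal sketch: running a quantized GGUF model from the table above on-device
# with llama-cpp-python (prompt format is an assumption; see the model card).
from llama_cpp import Llama

llm = Llama(
    model_path="walter-stablelm-3b.q4_k_m.gguf",  # 1.71 GB variant from the table
    n_ctx=2048,   # context window in tokens
    n_threads=4,  # CPU threads available on the device
)

prompt = (
    "### Instruction:\nSummarize the key points of the last patrol report.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.3, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

Smaller quantizations from the table trade answer quality for lower RAM use, which is the main constraint on wearable hardware.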
StableLM-Zephyr-3B
19
Name                             Quant method   Bits   Size      Max RAM required   Use case
stablelm-zephyr-3b.Q2_K.gguf     Q2_K           2      1.20 GB   3.70 GB            smallest, significant quality loss - not recommended for most purposes
stablelm-zephyr-3b.Q3_K_S.gguf   Q3_K_S         3      1.25 GB   3.75 GB            very small, high quality loss
stablelm-zephyr-3b.Q3_K_M.gguf   Q3_K_M         3      1.39 GB   3.89 GB            very small, high quality loss
stablelm-zephyr-3b.Q3_K_L.gguf   Q3_K_L         3      1.51 GB   4.01 GB            small, substantial quality loss
stablelm-zephyr-3b.Q4_0.gguf     Q4_0           4      1.61 GB   4.11 GB            legacy; small, very high quality loss - prefer using Q3_K_M
stablelm-zephyr-3b.Q4_K_M.gguf   Q4_K_M         4      1.71 GB   4.21 GB            medium, balanced quality - recommended
stablelm-zephyr-3b.Q5_0.gguf     Q5_0           5      1.94 GB   4.44 GB            legacy; medium, balanced quality - prefer using Q4_K_M
stablelm-zephyr-3b.Q5_K_M.gguf   Q5_K_M         5      1.99 GB   4.49 GB            large, very low quality loss - recommended
stablelm-zephyr-3b.Q6_K.gguf     Q6_K           6      2.30 GB   4.80 GB            very large, extremely low quality loss
stablelm-zephyr-3b.Q8_0.gguf     Q8_0           8      2.97 GB   5.47 GB            very large, extremely low quality loss - not recommended
https://huggingface.co/TheBloke/stablelm-zephyr-3b-GGUF
TinyLlama-1.1B-Chat-v1.0
20
Name                                   Quant method   Bits   Size      Use case
TinyLlama-1.1B-Chat-v1.0-Q2_K.gguf     Q2_K           2      482 MB    smallest, significant quality loss - not recommended for most purposes
TinyLlama-1.1B-Chat-v1.0-Q3_K_L.gguf   Q3_K_L         3      592 MB    small, substantial quality loss
TinyLlama-1.1B-Chat-v1.0-Q3_K_M.gguf   Q3_K_M         3      550 MB    small, high quality loss
TinyLlama-1.1B-Chat-v1.0-Q3_K_S.gguf   Q3_K_S         3      499 MB    small, high quality loss
TinyLlama-1.1B-Chat-v1.0-Q4_0.gguf     Q4_0           4      637 MB    legacy; small, very high quality loss - prefer using Q3_K_M
TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf   Q4_K_M         4      668 MB    medium, balanced quality - recommended
TinyLlama-1.1B-Chat-v1.0-Q4_K_S.gguf   Q4_K_S         4      643 MB    small, greater quality loss
TinyLlama-1.1B-Chat-v1.0-Q5_0.gguf     Q5_0           5      766 MB    legacy; medium, balanced quality - prefer using Q4_K_M
TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf   Q5_K_M         5      782 MB    large, very low quality loss - recommended
TinyLlama-1.1B-Chat-v1.0-Q5_K_S.gguf   Q5_K_S         5      766 MB    large, low quality loss - recommended
TinyLlama-1.1B-Chat-v1.0-Q6_K.gguf     Q6_K           6      903 MB    large, extremely low quality loss
TinyLlama-1.1B-Chat-v1.0-Q8_0.gguf     Q8_0           8      1.17 GB   large, extremely low quality loss - not recommended
https://huggingface.co/second-state/TinyLlama-1.1B-Chat-v1.0-GGUF
LinguaMatic Tiny
21
Name                                Quant method   Size
LinguaMatic-Tiny-GGUF.Q2_K.gguf     q2_k           483.12 MB
LinguaMatic-Tiny-GGUF.Q3_K_S.gguf   q3_k_s         500.32 MB
LinguaMatic-Tiny-GGUF.Q3_K_M.gguf   q3_k_m         550.82 MB
LinguaMatic-Tiny-GGUF.Q3_K_L.gguf   q3_k_l         592.50 MB
LinguaMatic-Tiny-GGUF.Q4_K_S.gguf   q4_k_s         643.73 MB
LinguaMatic-Tiny-GGUF.Q4_K_M.gguf   q4_k_m         668.79 MB
LinguaMatic-Tiny.Q5_K_S.gguf        q5_k_s         767.00 MB
LinguaMatic-Tiny.Q5_K_M.gguf        q5_k_m         783.02 MB
https://huggingface.co/erfanzar/LinguaMatic-Tiny-GGUF/tree/main
Tiny Vicuna 1B
22
Name                         Quant method   Size
tiny-vicuna-1b.q2_k.gguf     q2_k           482.14 MB
tiny-vicuna-1b.q3_k_m.gguf   q3_k_m         549.85 MB
tiny-vicuna-1b.q4_k_m.gguf   q4_k_m         667.81 MB
tiny-vicuna-1b.q5_k_m.gguf   q5_k_m         782.04 MB
tiny-vicuna-1b.q6_k.gguf     q6_k           903.41 MB
tiny-vicuna-1b.q8_0.gguf     q8_0           1.17 GB
https://huggingface.co/afrideva/Tiny-Vicuna-1B-GGUF
TinyAlpaca-v0.1
23
Name                          Quant method   Size
tinyalpaca-v0.1.q2_k.gguf     q2_k           482.14 MB
tinyalpaca-v0.1.q3_k_m.gguf   q3_k_m         549.85 MB
tinyalpaca-v0.1.q4_k_m.gguf   q4_k_m         667.81 MB
tinyalpaca-v0.1.q5_k_m.gguf   q5_k_m         782.04 MB
tinyalpaca-v0.1.q6_k.gguf     q6_k           903.41 MB
tinyalpaca-v0.1.q8_0.gguf     q8_0           1.17 GB
https://huggingface.co/afrideva/TinyAlpaca-v0.1-GGUF
TinyMistral-248M-Evol-Instruct
24
Name                                         Quant method   Size
tinymistral-248m-evol-instruct.q2_k.gguf     q2_k           115.26 MB
tinymistral-248m-evol-instruct.q3_k_m.gguf   q3_k_m         130.08 MB
tinymistral-248m-evol-instruct.q4_k_m.gguf   q4_k_m         155.67 MB
tinymistral-248m-evol-instruct.q5_k_m.gguf   q5_k_m         179.23 MB
tinymistral-248m-evol-instruct.q6_k.gguf     q6_k           204.26 MB
tinymistral-248m-evol-instruct.q8_0.gguf     q8_0           264.32 MB
https://huggingface.co/afrideva/TinyMistral-248M-Evol-Instruct-GGUF
Mistral 7B 0.2 LLM for iPhone
25
https://apps.apple.com/us/app/offline-chat-private-ai/id6474077941
Introducing Offline Chat, the next-generation AI chatbot that runs entirely on your device without the Internet. You can use it anywhere, and your data stays private and secure. While Offline Chat might not match the prowess of top-tier online models due to inherent memory and processing constraints, it stands out as an engaging and versatile tool. It's perfect for sparking creativity and assisting in various tasks such as writing, though it's advisable to verify facts independently. The app requires a Pro iPhone with a minimum of 6 GB of RAM. Only the following devices meet the requirement: iPhone 15 Pro, iPhone 14 Pro, iPhone 13 Pro, iPhone 12 Pro. For the technically oriented, the AI is a fine-tuned large language model based on Mistral 7B 0.2, quantized to 3-bit.
The size of this LLM app is 3.3 GB.
The AndesGPT SLM with 7 billion parameters is claimed to be only slightly inferior to GPT-4.
For processing a 2,000-word text, AndesGPT demonstrates a fast response time of 2.9 seconds, outperforming the industry standard by a factor of 2.5.
The neural network can generate abstracts of up to 14,000 words, demonstrating modeling capabilities 3.5 times superior to those of competitors.
26
https://en.xiaomitoday.it/oppo-official-andesgpt-features-details.html
AndesGPT for OPPO Find X7 smartphones
Mixture of Experts
Architecture
27
Mixture of Experts Concept
28
https://machinelearningmastery.com/mixture-of-experts/
"The whole is greater than the sum of its parts!"
https://www.youtube.com/watch?v=3MX4RJbGIVQ
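To make the routing idea concrete, here is a minimal PyTorch sketch of a mixture-of-experts feed-forward block with top-k gating. It only illustrates the concept and omits the load-balancing losses, capacity limits, and expert parallelism used in production MoE LLMs such as Mixtral; all dimensions are arbitrary.

```python
# Minimal mixture-of-experts feed-forward layer with top-k token routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(d_model, n_experts)  # router: scores each expert per token
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.gate(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                # accumulate the chosen experts' outputs
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e              # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask][:, k:k+1] * expert(x[mask])
        return out

tokens = torch.randn(10, 512)                      # 10 token embeddings
print(MoEFeedForward()(tokens).shape)              # torch.Size([10, 512])
```

Each token activates only top_k of the n_experts feed-forward blocks, which is what lets an MoE model grow in total parameters without a proportional growth in per-token compute.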
Mixture of Experts Concept
29
https://arxiv.org/abs/2211.15841
Mixture of Switch Experts Concept
30
https://arxiv.org/abs/2208.02813
Mixtral 8x7B = Mixture of 8 Mistral 7B
31
https://levelup.gitconnected.com/introducing-mixtral-8x7b-revolution-in-ai-language-models-better-then-chatgpt3-5-llama-70b-336f85a4e24f
"Zoo" of x8 MoE LLMs
32
Sonya 7B X8 MoE
Synthia-MoE-v3-Mixtral-8x7B
Chupacabra-8x7B-experts
Starling-LM-alpha-8x7B-MoE
Neural-Chat-v3-3-8x7B-MoE
OpenChat 3.5-16k-8x7B-MoE
Falkor 7B MoE 8x7B Experts
TinyLlama-1.1B-Chat-v0.6-x8-MoE
Trinity-v1.2-x8-MoE
"Zoo" of x4 and x2 MoE LLMs
33
Lumosia-MoE-4x10.7B
SOLARC-MOE-10.7Bx4
PiVoT-MoE-4x10.7B
GOAT-Adapt-MoE-4x7B
MixLLaVA-1-2x7B-MoE
(Visual Question-Answering)
Nous-Hermes-2-SOLAR-10.7B-x2-MoE
Phixtral-4x2_8
Phixtral-2x2_8
34
TinyLlama-1.1B-Chat-v0.6-x8-MoE
Name                                          Quant method   Size
tinyllama-1.1b-chat-v0.6-x8-moe.Q2_K.gguf     q2_k           2.15 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q3_K_S.gguf   q3_k_s         2.79 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q3_K_M.gguf   q3_k_m         2.80 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q3_K_L.gguf   q3_k_l         2.82 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q4_K_S.gguf   q4_k_s         3.64 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q4_K_M.gguf   q4_k_m         3.64 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q4_1.gguf     q4_1           4.03 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q5_K_S.gguf   q5_k_s         4.43 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q5_K_M.gguf   q5_k_m         4.43 GB
tinyllama-1.1b-chat-v0.6-x8-moe.Q5_1.gguf     q5_1           4.83 GB
https://huggingface.co/tsunemoto/TinyLlama-1.1B-Chat-v0.6-x8-MoE-GGUF
35
Examples of combinations of different LLMs within the MoE
Lumosia-MoE-4x10.7B MoE
made with the following models:
SOLARC-M-10.7B
PiVoT-10.7B-Mistral-v0.2-RP
Sakura-SOLAR-Instruct
CarbonVillain-en-10.7B-v1
SOLARC-MOE-10.7Bx4 MoE
is based on the SOLAR architectures:
Sakura-SOLAR-Instruct
SauerkrautLM-UNA-SOLAR-Instruct
SauerkrautLM-SOLAR-Instruct
UNA-SOLAR-10.7B-Instruct-v1.0
Phixtral-4x2_8 MoE has been made
with a custom version of
the mergekit library (Mixtral branch)
and the following configuration:
dolphin-2_6-phi-2
phi-2-dpo
phi-2-sft-dpo-gpt4_en-ep1
mrm8488/phi-2-coder
Next steps: combinations of different LLMs and SLMs of different sizes and with different quantization, distributed across different devices in a hierarchical structure (a minimal routing sketch follows below).
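A minimal sketch of this hierarchical idea follows. All endpoints, model files, thresholds, and the routing heuristic are illustrative assumptions rather than a fielded design: a small on-device SLM answers routine queries, longer queries are forwarded to a larger MoE model hosted on a squad-level node (assumed here to expose the llama.cpp HTTP server's /completion endpoint), and the device degrades gracefully to the local model when the link is down.

```python
# Illustrative sketch of hierarchical LLM/SLM placement across devices.
import requests
from llama_cpp import Llama

# Small quantized model kept on the soldier's device (file name from the tables above).
local_slm = Llama(model_path="TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf", n_ctx=1024)

SQUAD_NODE = "http://squad-node.local:8080/completion"  # hypothetical squad-level MoE server

def ask(prompt: str, max_tokens: int = 128) -> str:
    # Crude complexity heuristic: short, routine queries stay on-device.
    if len(prompt.split()) < 40:
        out = local_slm(prompt, max_tokens=max_tokens, temperature=0.2)
        return out["choices"][0]["text"]
    # Longer planning queries go to the larger MoE model on the squad node when reachable.
    try:
        r = requests.post(SQUAD_NODE, json={"prompt": prompt, "n_predict": max_tokens}, timeout=10)
        r.raise_for_status()
        return r.json().get("content", "")
    except requests.RequestException:
        # Degrade gracefully: fall back to the on-device model when offline.
        out = local_slm(prompt, max_tokens=max_tokens, temperature=0.2)
        return out["choices"][0]["text"]

print(ask("Report remaining water and ammunition in the standard squad format."))
```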
36
Figure panels: Transformer Encoder; MoE Transformer Encoder; MoE Transformer Encoder with multi-device placement
https://arxiv.org/abs/2006.16668
37
Large and Small Language Models
for Military Use
GPT-4
Using an LLM for edge device cybersecurity
38
GPT-4
39
Using an LLM for the automatic generation of Urgent Voice Messages (LUVM): filling out standardized text forms and using text-to-speech when necessary
40
Call for
Fire
Generation of special symbols for an emergency report: rotation/vibration/pulsation of symbols
Example:
0 This is Y12A
EMERGENCY FIRE MISSION
A. My position is: Grid 355 463
B. Target location is: 370 550
C. Elevation 250 m
D. 1506 mil, 1500 m
E. Target is mechanised platoon on the move
in open ground
F. Neutralise
G. For 2 minutes
H. At my command
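A minimal sketch of the form-filling and text-to-speech step follows. It assumes the fields of the standardized form above have already been extracted upstream (for example by a local LLM from the soldier's free-form report); the template keys and the offline pyttsx3 engine are illustrative choices, not part of the original slides.

```python
# Minimal sketch: filling a standardized emergency fire-mission form and voicing it offline.
import pyttsx3

CALL_FOR_FIRE = (
    "{callsign_to} this is {callsign_from}\n"
    "EMERGENCY FIRE MISSION\n"
    "A. My position is: Grid {own_grid}\n"
    "B. Target location is: {target_grid}\n"
    "C. Elevation {elevation_m} m\n"
    "D. {direction_mil} mil, {distance_m} m\n"
    "E. Target is {target_description}\n"
    "F. {effect}\n"
    "G. For {duration}\n"
    "H. {control}"
)

# Fields mirroring the example above; in practice the assistant would extract them.
fields = {
    "callsign_to": "0", "callsign_from": "Y12A",
    "own_grid": "355 463", "target_grid": "370 550",
    "elevation_m": 250, "direction_mil": 1506, "distance_m": 1500,
    "target_description": "mechanised platoon on the move in open ground",
    "effect": "Neutralise", "duration": "2 minutes", "control": "At my command",
}

message = CALL_FOR_FIRE.format(**fields)
print(message)

engine = pyttsx3.init()  # offline text-to-speech, no network required
engine.say(message)
engine.runAndWait()
```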
41
Generation and visualization of ESMRM information at the squad level
42
Recreation, in augmented reality, of urban infrastructure that existed before its destruction during hostilities, to improve terrain orientation
GPT-4
43
Platoon leader uses augmented reality to design trench infrastructure
before combat begins
GPT-4
44
Platoon leader uses augmented reality to design an anti-tank minefield
GPT-4
45
Commander uses augmented reality to design an anti-personnel minefield
GPT-4
Local versions of large language models can be used to support commanders in decision-making based on the summarization of combat reports (a brief summarization sketch follows below).
46
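A minimal sketch of such report summarization with a local quantized model follows; the model file, the prompt wording, and the example reports are illustrative assumptions.

```python
# Minimal sketch: summarizing combat reports for a commander with a local quantized LLM.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)

reports = [
    "03:15 OP North: two vehicles heard moving east of checkpoint 4, no visual contact.",
    "04:40 1st squad: small-arms fire from the treeline at grid 371 552, no casualties.",
    "05:10 UAV team: dismounted group of 6-8 observed digging in at grid 372 554.",
]

prompt = (
    "[INST] Summarize the following combat reports for the company commander in "
    "three bullet points, then state the single most important development:\n\n"
    + "\n".join(reports) + " [/INST]"
)

out = llm(prompt, max_tokens=300, temperature=0.2)
print(out["choices"][0]["text"])
```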
GPT-4
“Image2Text” and “Text2Image” functions of GPT-4V
47
48
Application of an LLM for mapping the situation based on the transformation of the texts of reports and orders
49
Generation, with the help of an LLM, of the texts of combat orders from graphic images of the situation on maps
A local LLM will assist in servicing weapons systems and military equipment, help develop the best repair strategy, and is capable of acting as a voice assistant during the repair process
50
Cognitive measurement based on the Mind Tracker neurointerface and an LLM
51
https://eightify.app/summary/technology-and-business/boost-brain-productivity-with-neiry-mind-tracker-neurointerface-solution
LLM processing
Other use cases of a soldier's personal assistant
52
Drawing up a training schedule
Weather forecast
Ballistic calculations
Soil properties forecast
Battery charge forecast analytics
Medical advice
Route planning
Recommendations for maintenance and repair of equipment
Translator
Speech-to-Text / Text-to-Speech
Formation of standard messages
Summarization of reports
Text classification and segmentation
Information filtering
The concept of placing the parts of an MoE LLM on several devices within a unit of soldiers under the general control of the commander
53
GPT-4
Summary
The conducted studies demonstrated that the tasks of generating and processing military-oriented texts can be effectively implemented with the help of pre-trained LLMs and SLMs.
A comprehensive digitization of all existing texts is necessary to form a metadata set for further training of local versions of LLMs/SLMs.
An urgent task is to create portable local versions of LLMs and SLMs as a soldier's personal assistant.
The general global trends are:
1. Development of hardware platforms to reduce the cost of implementing local versions of LLMs/SLMs, scaling them down to the level of edge devices;
2. Borrowing MoE LLM/SLM architectures to address multimodal generative AI tasks, including the joint processing/generation of text, images, video, and audio.
54
References
1. Reding, D.F. & Eaton, J. Science & Technology Trends 2020-2040. NATO Science & Technology Organization, Office of the Chief Scientist, Brussels, Belgium. URL: https://www.nato.int/nato_static_fl2014/assets/pdf/2020/4/pdf/190422-ST_Tech_Trends_Report_2020-2040.pdf.
2. Zoe Stanley-Lockman, Edward Hunter Christie. An Artificial Intelligence Strategy for NATO. 25 October 2021. URL: https://www.nato.int/docu/review/articles/2021/10/25/an-artificial-intelligence-strategy-for-nato/index.html.
3. A.I. Shevchenko, O.V. Bilokobylsky, M.O. Vakulenko, A.S. Dovbysh, V.V. Kazimir, M.S. Klymenko, O.V. Kozlov, Y.P. Kondratenko, D.V. Lande, O.P. Mintser, S.K. Ramazanov, A.A. Roskladka, A.M. Sergienko, E.V. Sydenko, V.I. Slyusar, et al. Regarding the Draft Strategy of Development of Artificial Intelligence in Ukraine (2022-2030) // Artificial Intelligence, 2022, № 1, pp. 8-74. DOI: https://doi.org/10.15407/jai2022.01.008
4. Vadym Slyusar, Mykhailo Protsenko, Anton Chernukha, Vasyl Melkin, Olena Petrova, Mikhail Kravtsov, Svitlana Velma, Nataliia Kosenko, Olga Sydorenko, Maksym Sobol. Improving a neural network model for semantic segmentation of images of monitored objects in aerial photographs // Eastern-European Journal of Enterprise Technologies, № 6/2 (114), 2021, pp. 86-95. DOI: 10.15587/1729-4061.2021.248390.
55
THANK YOU
FOR YOUR ATTENTION !
swadim@ukr.net
www.slysuar.kiev.ua
https://scholar.google.com.ua/citations?hl=ru&user=wSegaWsAAAAJ
https://orcid.org/0000-0002-2912-3149
https://www.scopus.com/authid/detail.uri?authorId=7004240035
https://www.researchgate.net/profile/Vadym-Slyusar
56