
Autonomous AI Equipped Flying Killer Robots Are Here! - Autonomous AI Equipped Flying Killer Robots_1579732425_kargu-en.pdf (0/1)



 
 
#1
June 4th 21, 04:55 PM, posted to rec.aviation.piloting
Larry Dighera


Video:
https://www.stm.com.tr/en/kargu-auto...tor-attack-uav

Autonomous AI Equipped Flying Killer Robots, what could possibly go wrong?
:-(

---------------------------------------------------------------------------
https://www.npr.org/2021/06/01/10021...n-autonomous-d

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says

June 1, 2021, 3:09 PM ET
JOE HERNANDEZ


A Kargu rotary-wing attack drone loitering munition system manufactured by
the STM defense company of Turkey. A U.N. report says the weapons system was
used in Libya in March 2020.
Emre Cavdar/STM

Military-grade autonomous drones can fly themselves to a specific location,
pick their own targets and kill without the assistance of a remote human
operator. Such weapons are known to be in development, but until recently
there were no reported cases of autonomous drones killing fighters on the
battlefield.

Now, a United Nations report about a March 2020 skirmish in the military
conflict in Libya says such a drone, known as a lethal autonomous weapons
system — or LAWS — has made its wartime debut. But the report does not say
explicitly that the LAWS killed anyone.

"If anyone was killed in an autonomous attack, it would likely represent an
historic first known case of artificial intelligence-based autonomous
weapons being used to kill," Zachary Kallenborn wrote in Bulletin of the
Atomic Scientists.
https://thebulletin.org/2021/05/was-...uite-possibly/

The assault came during fighting between the U.N.-recognized Government of
National Accord and forces aligned with Gen. Khalifa Haftar, according to
the report by the U.N. Panel of Experts on Libya.

"Logistics convoys and retreating [Haftar-affiliated forces] were
subsequently hunted down and remotely engaged by the unmanned combat aerial
vehicles or the lethal autonomous weapons systems such as the STM Kargu-2
... and other loitering munitions," the panel wrote.

The Kargu-2
https://www.stm.com.tr/en/kargu-auto...tor-attack-uav
is an attack drone made by the Turkish company STM that can be operated both
autonomously and manually and that purports to use "machine learning" and
"real-time image processing" against its targets.

The U.N. report goes on: "The lethal autonomous weapons systems were
programmed to attack targets without requiring data connectivity between the
operator and the munition: in effect, a true 'fire, forget and find'
capability."

"Fire, forget and find" refers to a weapon that once fired can guide itself
to its target.

The idea of a "killer robot" has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups
have used remote-controlled aircraft to carry out reconnaissance, target
infrastructure and attack people. The U.S. in particular has used drones
extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in
recent fighting for control of the Nagorno-Karabakh region. Just last month,
the Israel Defense Forces reportedly used drones to drop tear gas on
protesters in the occupied West Bank, while Hamas launched loitering
munitions — so-called kamikaze drones — into Israel.

What's new about the incident in Libya, if confirmed, is that the drone used
had the capacity to operate autonomously, with no human controlling it:
essentially a "killer robot," formerly the stuff of science fiction.

Not all in the world of security are concerned.

"I must admit, I am still unclear on why this is the news that has gotten so
much traction," Ulrike Franke, a senior policy fellow at the European
Council on Foreign Relations, wrote on Twitter.

Franke noted that loitering munitions have been used in combat for "a while"
and questioned whether the autonomous weapon used in Libya actually caused
any casualties.

Jack McDonald, a lecturer in war studies at King's College London, noted
that the U.N. report did not make clear whether the Kargu-2 was operating
autonomously or manually at the time of the attack.

While this incident may or may not represent the first battlefield killing
by an autonomous drone, the idea of such a weapon is disquieting to many.

A global survey commissioned by the Campaign to Stop Killer Robots last year
found that a majority of respondents — 62% — said they opposed the use of
lethal autonomous weapons systems.
https://www.stopkillerrobots.org/202...robots-strong/
------------------------------------------------------------------------

https://thebulletin.org/2021/05/was-...uite-possibly/

Was a flying killer robot used in Libya? Quite possibly
By Zachary Kallenborn | May 20, 2021

A screenshot from a promotional video advertising the Kargu drone. In the
video, the weapon dives toward a target before exploding.

Last year in Libya, a Turkish-made autonomous weapon—the STM Kargu-2
drone—may have “hunted down and remotely engaged” retreating soldiers loyal
to the Libyan General Khalifa Haftar, according to a recent report by the UN
Panel of Experts on Libya. Over the course of the year, the UN-recognized
Government of National Accord pushed the general’s forces back from the
capital Tripoli, signaling that it had gained the upper hand in the Libyan
conflict, but the Kargu-2 signifies something perhaps even more globally
significant: a new chapter in autonomous weapons, one in which they are used
to fight and kill human beings based on artificial intelligence.

The Kargu is a “loitering” drone that can use machine learning-based object
classification to select and engage targets, with swarming capabilities in
development to allow 20 drones to work together. The UN report calls the
Kargu-2 a lethal autonomous weapon. Its maker, STM, touts the weapon's
"anti-personnel" capabilities in a grim video showing a Kargu model in a
steep dive toward a target in the middle of a group of mannequins. (If anyone
was killed in an autonomous attack, it would likely represent an historic
first known case of artificial intelligence-based autonomous weapons being
used to kill. The UN report heavily implies they were, noting that lethal
autonomous weapons systems contributed to significant casualties of the
manned Pantsir S-1 surface-to-air missile system, but is not explicit on the
matter.)

Many people, including Stephen Hawking and Elon Musk, have said they want to
ban these sorts of weapons, arguing that they can't distinguish between
civilians and soldiers. Others say such weapons will be critical in
countering fast-paced threats like drone swarms and may actually reduce the
risk to civilians, because they will make fewer mistakes than human-guided
weapons systems.
Governments at the United Nations are debating whether new restrictions on
combat use of autonomous weapons are needed. What the global community
hasn’t done adequately, however, is develop a common risk picture. Weighing
risk vs. benefit trade-offs will turn on personal, organizational, and
national values, but determining where risk lies should be objective.

It’s just a matter of statistics.



At the highest level, risk is a product of the probability and consequence
of error. Any given autonomous weapon has some chance of messing up, but
those mistakes could have a wide range of consequences. The highest risk
autonomous weapons are those that have a high probability of error and kill
a lot of people when they do. Misfiring a .357 magnum is one thing;
accidentally detonating a W88 nuclear warhead is something else.
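
To make that risk product concrete, here is a minimal Python sketch of
expected harm per engagement. The scenarios and numbers are hypothetical,
chosen only to show how a low-probability, high-consequence error can
dominate:

# Illustrative only: expected harm = probability of error x consequence.
# The systems and numbers below are hypothetical, chosen to show scale.

scenarios = {
    "small arm, one errant shot":       {"p_error": 0.05,  "casualties_if_wrong": 1},
    "loitering munition, urban strike": {"p_error": 0.05,  "casualties_if_wrong": 30},
    "hypothetical autonomous nuke":     {"p_error": 0.001, "casualties_if_wrong": 300_000},
}

for name, s in scenarios.items():
    expected_harm = s["p_error"] * s["casualties_if_wrong"]
    print(f"{name:34s} expected harm per engagement: {expected_harm:10.3f}")

Even with a fifty-times-lower error probability, the high-consequence weapon
dominates the expected harm.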

There are at least nine questions that are important to understanding where
the risks are when it comes to autonomous weapons.

How does an autonomous weapon decide who to kill? Landmines—in some sense an
extremely simple autonomous weapon—use pressure sensors to determine when to
explode. The firing threshold can be varied to ensure the landmine does not
explode when a child picks it up. Loitering munitions like the Israeli Harpy
typically detect and home in on enemy radar signatures. Like with landmines,
the sensitivity can be adjusted to separate civilian from military radar.
And thankfully, children don’t emit high-powered radio waves.
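
As an illustration of how such an adjustable threshold works, here is a
minimal Python sketch; the 40 kg figure and the function name are assumptions
for illustration, not a real fuze specification:

# Sketch of an adjustable firing threshold, the simplest form of target
# "selection." The 40 kg value is assumed for illustration only.

DETONATION_THRESHOLD_KG = 40.0  # set above a child's weight, below an equipped adult's

def should_detonate(pressure_kg: float) -> bool:
    # A pressure-fuzed mine "decides" with a single comparison.
    return pressure_kg >= DETONATION_THRESHOLD_KG

for pressure in (20.0, 35.0, 85.0):  # child, near-threshold, equipped adult
    print(pressure, should_detonate(pressure))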

But what has prompted international concern is the inclusion of machine
learning-based decision-making as was used in the Kargu-2. These types of
weapons operate on software-based algorithms “taught” through large training
datasets to, for example, classify various objects. Computer vision programs
can be trained to identify school buses, tractors, and tanks. But the
datasets they train on may not be sufficiently complex or robust, and an
artificial intelligence (AI) may “learn” the wrong lesson. In one case, a
company was considering using an AI to make hiring decisions until
management determined that the computer system believed the most important
qualification for job candidates was being named Jared and playing high
school lacrosse. The results wouldn’t be comical at all if an autonomous
weapon made similar mistakes. Autonomous weapons developers need to
anticipate the complexities that could cause a machine learning system to
make the wrong decision. The black box nature of machine learning, in which
how the system makes decisions is often opaque, adds extra challenges.
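
To make the failure mode concrete, here is a toy Python sketch, assuming a
contrived nearest-centroid classifier and a skewed dataset (it does not
reflect the Kargu-2's actual software). Because every tank in the
hypothetical training set happens to be green, the model keys on paint color
rather than size, and a green-painted school bus is labeled a tank:

import numpy as np

# Toy illustration of "learning the wrong lesson" from a skewed dataset.
# Features: [length / 20 m, painted_green]. Hypothetical data.

train_X = np.array([
    [0.35, 1.0], [0.37, 1.0], [0.34, 1.0],  # tanks: ~7 m long, all green
    [0.55, 0.0], [0.60, 0.0], [0.53, 0.0],  # buses: ~11 m long, none green
])
train_y = np.array(["tank", "tank", "tank", "bus", "bus", "bus"])

# "Training" here is just averaging each class's feature vectors.
centroids = {label: train_X[train_y == label].mean(axis=0)
             for label in np.unique(train_y)}

def classify(x: np.ndarray) -> str:
    # Assign the input to the nearest class centroid (Euclidean distance).
    return min(centroids, key=lambda lbl: float(np.linalg.norm(x - centroids[lbl])))

# A green-painted school bus: the right answer is "bus," but the spurious
# color correlation learned from the skewed data pulls it to "tank".
print(classify(np.array([0.56, 1.0])))  # -> "tank"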

What role do humans have? Humans might be able to watch for something going
wrong. In human-in-the-loop configurations, a soldier monitors autonomous
weapon activities, and, if the situation appears to be headed in a horrific
direction, can make a correction. As the Kargu-2’s reported use shows, a
human-off-the-loop system simply does its thing without a safeguard. But
having a soldier in the loop is no panacea. The soldier may trust the
machine and fail to adequately monitor its operation. For example, Missy
Cummings, the director of Duke University’s Human and Autonomy Laboratory,
finds that when it comes to autonomous cars, “drivers who think their cars
are more capable than they are may be more susceptible to increased states
of distractions, and thus at higher risk of crashes.”

Of course, a weapon's autonomous behavior may not always be on: a human might
be in, on, or off the loop depending on the situation. South Korea has
deployed a sentry weapon, the SGR-A1, along the demilitarized zone with North
Korea that reportedly operates this way. The risk changes based on how and
when the fully autonomous function is switched on. Autonomous operation by
default obviously creates more risk than autonomous operation restricted
only to narrow circumstances.
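
The in/on/off-the-loop distinction can be read as simple decision logic. The
Python sketch below is a hypothetical illustration of that reading; it models
no fielded system's actual interface:

from enum import Enum

# Sketch of how the human's position relative to "the loop" changes what
# the weapon may do on its own before firing. Hypothetical control logic.

class LoopMode(Enum):
    HUMAN_IN_THE_LOOP = "in"    # human must approve every engagement
    HUMAN_ON_THE_LOOP = "on"    # weapon fires unless a human vetoes in time
    HUMAN_OFF_THE_LOOP = "off"  # weapon decides entirely on its own

def may_engage(mode: LoopMode, operator_approved: bool, operator_vetoed: bool) -> bool:
    if mode is LoopMode.HUMAN_IN_THE_LOOP:
        return operator_approved       # no approval, no shot
    if mode is LoopMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoed     # silence counts as consent
    return True                        # off the loop: no safeguard at all

# An inattentive operator (neither approving nor vetoing) blocks an
# in-the-loop system but not an on-the-loop or off-the-loop one.
for mode in LoopMode:
    print(mode.name, may_engage(mode, operator_approved=False, operator_vetoed=False))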

What payload does an autonomous weapon have? Accidentally shooting someone
is horrible, but vastly less so than accidentally detonating a nuclear
warhead. The former might cost an innocent his or her life, but the latter
may kill hundreds of thousands. Policymakers may focus on the larger
weapons, recognizing the cost of a mistake, and thereby reduce the risks of
autonomous weapons. However, exactly what payloads autonomous weapons will
have is unclear. In theory, autonomous weapons could be armed with guns,
bombs, missiles, electronic warfare jammers, lasers, microwave weapons,
computers for cyber-attack, chemical weapons agents, biological weapons
agents, nuclear weapons, and everything in between.

What is the weapon targeting? Whether an autonomous weapon is shooting a
tank, a naval destroyer, or a human matters. Current machine learning-based
systems cannot effectively distinguish a farmer from a soldier. Farmers
might hold a rifle to defend their land, while soldiers might use a rake to
knock over a gun turret. But even adequate classification of a vehicle is
difficult, because various factors may inhibit an accurate decision. For
example, in one study, obscuring the wheels and half of the front window of
a bus caused a machine learning-based system to classify the bus as a
bicycle. A tank’s cannon might make it easy to distinguish from a school bus
in an open environment, but not if trees or buildings obscure key parts of
the tank, like the cannon itself.

A US Department of Defense Perdix swarming drone test. Credit: US Department
of Defense.
How many autonomous weapons are being used? More autonomous weapons means
more opportunities for failure. That’s basic probability. But when
autonomous weapons communicate and coordinate their actions, such as in a
drone swarm, the risk of something going wrong increases. Communication
creates risks of cascading error in which an error by one unit is shared
with another. Collective decision-making also creates the risk of emergent
error, in which individually correct perceptions add up to a collective
mistake. To illustrate emergent error, consider the parable of the blind men
and the elephant. Three blind men hear that a strange animal, an elephant,
has been brought to town. One man feels the trunk and says the elephant is
thick like
a snake. Another feels the legs and says it’s like a pillar. A third feels
the elephant’s side and describes it as a wall. Each one perceives physical
reality accurately, if incompletely, but their individual and collective
interpretations of that reality are incorrect. Would a drone swarm conclude
the elephant is an elephant, a snake, a pillar, a wall, or something else
entirely?
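
The "more weapons, more failures" point is basic probability: assuming
independent errors with per-drone error rate p, the chance that at least one
of n drones errs is 1 - (1 - p)^n. A short Python sketch, with the 1 percent
error rate assumed purely for illustration:

# Assuming independent errors with per-drone error probability p, the chance
# that at least one of n drones errs is 1 - (1 - p)^n. Numbers illustrative.

p = 0.01  # assumed 1% chance that a single drone misclassifies its target

for n in (1, 5, 20, 100):
    at_least_one_error = 1 - (1 - p) ** n
    print(f"swarm of {n:3d}: P(at least one error) = {at_least_one_error:.1%}")

At that assumed rate, a 100-drone swarm errs at least once nearly two times
in three, and coordination can spread a single unit's error to the rest.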

Where are autonomous weapons being used? An armed, autonomous ground vehicle
wandering a snow-covered Antarctic glacier has almost no chance of killing
innocent people. Not much lives there and the environment is mostly barren
with little to obstruct or confuse the vehicle’s onboard sensors. But the
same vehicle wandering the streets of New York City or Tokyo is another
matter. In cities, the AI system would face many opportunities for error:
trees, signs, cars, buildings, and people all may jam up correct target
assessment.

Sea-based autonomous weapons might be less prone to error just because it
may be easier to distinguish between a military and a civilian ship, with
fewer obstructions, than it is to do the same for a school bus and an
armored personnel carrier. Even the weather matters. One recent study found
foggy weather reduced the accuracy of an AI system used to detect obstacles
on roads to 58 percent compared to 92 percent in clear weather. Of course,
bad weather may also hinder humans in effective target classification, so an
important question is how AI classification compares to human
classification.

How well tested is the weapon? Any professional military would verify and
test whether an autonomous weapon works as desired before putting soldiers
and broader strategic goals at risk. However, the military may not test for
all the complexities that may confound an autonomous weapon, especially if
those complexities are unknown. Testing will also be based on anticipated
uses and operational environments, which may change as the strategic
landscape changes. An autonomous weapon robustly tested in one environment
may break down when used in another. Seattle has a lot more foggy days than
Riyadh, but far fewer sandstorms.

How have adversaries adapted? In a battle involving autonomous weapons,
adversaries will seek to confound operations, which may not be very
difficult. OpenAI—a world-leading AI company—developed a system that can
classify an apple as a Granny Smith with 85.6 percent confidence. Yet, tape
a piece of paper that says “iPod” on the apple, and the machine vision
system concludes with 99.7 percent confidence the apple is an iPod. In one
case, AI researchers changed a single pixel on an image, causing a machine
vision system to classify a stealth bomber as a dog. In war, an opponent
could just paint “school bus” on a tank or, more maliciously, “tank” on a
school bus and potentially fool an autonomous weapon.
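
Here is a toy Python illustration of how a single-pixel change can flip a
classifier's output. The linear model and its weights are contrived for the
example; this is not the method used in the cited study:

import numpy as np

# Toy version of a single-pixel attack: a linear classifier over a 4-pixel
# "image." One well-chosen pixel change flips the predicted class.

# Each weight row scores one class; values are contrived for illustration.
W = np.array([
    [1.0, 1.0, 0.0, 0.0],  # class 0: "stealth bomber"
    [0.0, 0.0, 1.0, 5.0],  # class 1: "dog"
])
image = np.array([0.9, 0.8, 0.1, 0.1])

def predict(x: np.ndarray) -> int:
    # Pick the class with the highest linear score.
    return int(np.argmax(W @ x))

print(predict(image))        # -> 0 ("stealth bomber")

adversarial = image.copy()
adversarial[3] = 0.5         # perturb a single pixel
print(predict(adversarial))  # -> 1 ("dog")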

How widely available are autonomous weapons? States and non-state actors
will naturally vary in their risk tolerance, based on their strategies,
cultures, goals, and overall sensitivity to moral trade-offs. The easier it
is to acquire and use autonomous weapons, the more the international
community can expect the weapons to be used by apocalyptic terrorist groups,
nefarious regimes, and groups that are just plain insensitive to the error
risk. As Stuart Russell, a professor of computer science at the University
of California, Berkeley, likes to note: “[W]ith three good grad students and
possibly the help of a couple of my robotics colleagues, it will be a term
project to build a weapon that could come into the United Nations building
and find the Russian ambassador and deliver a package to him.” Fortunately,
technical acumen, organization, infrastructure, and resource availability
will limit how sophisticated autonomous weapons are. No lone wolf terrorist
will ever build an autonomous F-35 in his garage.

Autonomous weapon risk is complicated, variable, and multi-dimensional—the
what, where, when, why, and how of use all matter. On the high-risk end of
the spectrum are autonomous nuclear weapons and the use of collaborative,
autonomous swarms in heavily urban environments to kill enemy infantry; on
the low-end are autonomy-optional weapons used in unpopulated areas as
defensive weapons and only used when death is imminent. Where states draw
the line depends on how their militaries and societies balance the risk of
error
against military necessity. But to draw a line at all requires a shared
understanding of where risk lies.
--------------------------------------------------------------------------

https://www.stm.com.tr/en/kargu-auto...tor-attack-uav


KARGU®
Rotary Wing Attack Drone Loitering Munition System

KARGU® is a rotary wing attack drone that has been designed for asymmetric
warfare or anti-terrorist operations. It can be carried by a single soldier
and operated in both autonomous and manual modes.

KARGU® can be effectively used against static or moving targets through its
indigenous and real-time image processing capabilities and machine learning
algorithms embedded on the platform.

The system is comprised of the “Rotary-Wing Combat UAV (UCAV)” and “Ground
Control Unit” components.

KARGU®, which is included in the inventory of the Turkish Armed Forces,
enables soldiers to detect and eliminate threats in a region without entering
risky areas, especially in asymmetric warfare and anti-terrorist operations.



Capabilities | Competencies
Reliable Day and Night Operation
Autonomous and Precise Hit with Minimal Collateral Damage
Different Ammunition Options
Tracking Moving Targets
High Performance Navigation and Control Algorithms
Deployable and Operable by Single Soldier
In-Flight Mission Abort and Emergency Self-Destruction
Platform-tailored, advanced electronic ammunition safety, setup and trigger
systems
Disposal at Adjustable Altitude
Indigenous National Embedded Hardware and Software
Image Processing-Based Control Applications
Embedded and Real-Time Object Tracking, Detection and Classification
Ability to Load Ammunition Prior to Use
10x Optical Zoom
2 Axis Stabilised Indigenous POD
User-Friendly Ground Control Unit interface

Technical Features
Range: 5 km
Endurance: 30 minutes
Mission Altitude: 500 meters
Maximum Altitude: 2,800 meters (MSL)
Maximum Speed: 72 km/hour
Dimensions: 600 mm x 600 mm x 430 mm
Weight: 7,060 grams
Operating Temperature: -20 / +50 °C



KARGU EN brochure (PDF - 255.06 KB)
Other Autonomous Drone Products
TOGAN - Autonomous Multi-Rotor Reconnaissance UAV
ALPAGU - Fixed Wing Loitering Munition System
Autonomous Drone Projects
Swarm Intelligence UAV Project
KERKES Project
----------------------------------------------------------

https://www.stopkillerrobots.org/202...robots-strong/

Opposition to killer robots remains strong — poll
January 28, 2021

A new survey in 28 countries finds that more than three in five people
oppose using lethal autonomous weapons systems, commonly called “killer
robots.” 62% of respondents said they oppose the use of lethal autonomous
weapons systems, while 21% support such use and 17% said they were not sure.

[Graphic: number of respondents who oppose the use of lethal autonomous
weapons]
Opposition was strong for both women (63%) and men (60%) although men are
more likely to favor use of these weapons (26%) compared with women (16%).
Opposition to killer robots was strong across generations and steadily
increased with age, from 54% for those under 35 to 69% for ages 50 to 74.

The survey, conducted in December 2020 by the market research company Ipsos
and commissioned by the Campaign to Stop Killer Robots, indicates that even
with COVID-19 and economic uncertainty dominating headlines in 2020, public
awareness of and sentiment against the development of killer robots remains
steady and strong.

In response to these findings, Mary Wareham, coordinator of the Campaign to
Stop Killer Robots, said:

“States must launch negotiations to create a new treaty to retain meaningful
human control over the use of force. Public opposition to killer robots is
strong, which raises expectations of bold political action to ban them.”

Opposition to killer robots has increased since 2017
Opposition to killer robots increased in 13 of the 26 countries previously
surveyed in 2018, with the biggest increases in Brazil (up 16 percentage
points from 2018), Israel (up 12), Japan (up 11), and South Africa (up 7),
followed by Australia and Sweden (both up 5). This is the third Ipsos survey
in six years to gauge opposition to killer robots. The first survey,
conducted in 2017, found that only 56% of those surveyed opposed killer
robots. By 2020, opposition had increased to 62%.

[Graphic: what concerned respondents the most about lethal autonomous
weapons]
The 2020 Ipsos poll also asked those opposed to killer robots what concerned
them the most. 66% answered that lethal autonomous weapons systems would
"cross a moral line because machines should not be allowed to kill." More
than half (53%) said killer robots would be "unaccountable," and 42% cited
concerns that killer robots would be subject to technical failures.

Detailed results by country
The 2020 Ipsos poll surveyed nearly 19,000 people, using samples of 500 to
1,000 people in each of the 28 countries: Argentina, Australia, Belgium,
Brazil, Canada, China, Colombia, France, Germany, Great Britain, Hungary,
India, Israel, Italy, Japan, Mexico, Netherlands, Norway, Peru, Poland,
Russia, South Africa, South Korea, Spain, Sweden, Switzerland, Turkey, and
United States.

A majority of respondents in 26 countries opposed killer robots. The only
countries where a majority of respondents did not oppose killer robots were
France (47%) and India (36%). The strongest opposition was in Sweden (76%),
Turkey (73%), Hungary (70%), Germany (68%), Norway (67%), and Mexico (66%).

[Graphic: countries with the strongest opposition to killer robots]
In 21 countries, 59% or more of respondents were opposed: Sweden (76%),
Turkey (73%), Hungary (70%), Germany (68%), Norway (67%), Colombia (66%),
Belgium (66%), Mexico (66%), Spain (66%), South Africa (66%), Peru (65%),
Poland (65%), South Korea (65%), Australia (64%), Brazil (62%), Canada
(60%), Switzerland (60%), Argentina (59%), Italy (59%), Japan (59%), and the
Netherlands (59%).

Notably, a majority opposed killer robots in five countries most active in
the development and testing of weapons systems with decreasing levels of
human control: Russia (58%), UK (56%), US (55%), China (53%) and Israel
(53%).

All countries surveyed by Ipsos have participated since 2014 in diplomatic
meetings on concerns raised by lethal autonomous weapons systems. Those
talks have been stalled since November 2020, when the Convention on Certain
Conventional Weapons (CCW) failed to agree on its program of work in 2021.

“Public sentiment against fully autonomous weapons has not diminished,”
Wareham said. “Now’s the time for strong preventive measures, not further
diplomatic inaction.”

For more updates on the Campaign to Stop Killer Robots, subscribe to our
newsletter. Media enquiries can be directed to .
-----------------------------------------------------------------------

https://en.defenceturk.net/indigenou...ears-in-libya/

Indigenous kamikaze drone KARGU by STM appears in Libya
by Fatih Mehmet Küçük, May 28, 2020, in Land Forces and Land Systems
Developed by Savunma Teknolojileri Mühendislik ve Ticaret A.S. (STM), a
prominent Turkish defense industry company, the KARGU Autonomous Tactical
Multi-Rotor Attack UAV was spotted on Wednesday, May 27, in Ain Zara, south
of Tripoli, Libya.

A number of social media accounts supporting Haftar alleged that a drone/UAV
departing from Mitiga Air Base had been shot down. However, Defence Turk's
initial examination of the photos of the allegedly downed platform revealed
that it had not been shot down: the photos showed only parts left of the
drone after it had carried out its attack.

Detection of KARGU from photos
Kadir Dogan, an author at Defence Turk, stated after his examination that the
parts in the photos belonged to KARGU.

[Photo: parts found in Libya]

According to Dogan, the parts in the photos match STM's KARGU in the
following respects:

Foldable rotor
Foldable arms and legs
Colour
Metal battery compartment
Case structure and its dimensions

Why is KARGU still intact despite being hit?
The structure of the KARGU kamikaze UAV makes it possible to assess the
condition of the front side, where the explosive and the camera are mounted.
Kadir Dogan's initial examination showed that the guided warhead and camera
were not visible in the photos: these parts had separated from the rear
section after burning or exploding. In other words, the explosive on the
front side detonated while engaging the target, while the rear section
remained structurally intact. The silvery, foil-wrapped parts described on
various social media accounts as the explosive warhead were in fact the
battery blocks.



Pro-Haftar forces claimed a number of Bayraktar TB2s were downed
Forces affiliated with putschist Khalifa Haftar spread disinformation on
social media. In previous months, Haftar supporters claimed to have downed 26
Bayraktar TB2s, a made-up story. When the images were analyzed, it came to
light that some UAVs downed in the region were being carried to various
locations in trucks, rephotographed, and shared as if they had been recently
downed.

KARGU
KARGU is an Autonomous Tactical Multi-Rotor Attack UAV solution for
asymmetrical warfare and anti-terror campaigns that can be deployed by a
single soldier and operated either autonomously or by remote control. KARGU
can be used effectively against stationary or moving targets with the
real-time image processing and deep learning algorithms embedded on the
platform.

The system is composed of three components: the "Rotary Wing Attack UAV," the
"Ground Control Unit," and the "UAV Charging Station." In our special report
from IDEF-19, it was stated that, according to information obtained by
Defence Turk, the STM Kargu attack UAV had reached the final stage of export
to an unannounced country and had drawn strong interest from several other
prospective export customers.

While STM continues its marketing activities, the company is also working on
further development of its products. In this vein, the flight time of the
renewed Kargu-2 has risen from 25 minutes to 30 minutes. In addition, a
variety of design improvements were made to Kargu.
---------------------------------------------------------------------------

https://en.defenceturk.net/back-to-t...us-technology/

Back to the Future: Autonomous Technology
by Seray Güldane, May 11, 2020, in Articles
AUTONOMOUS DRONES

Autonomous systems are an integral part of warfare, used in multiple roles
during or prior to an operation. An autonomous aerial vehicle is capable of
intelligence, surveillance, and reconnaissance tasks. Though adopted even by
non-state actors, UAVs have long been presumed to lack the tenacity humans
possess. Unreliable as UAVs are said to be, there have been a variety of
attempts to make them platforms that can match wits with humans, reaping the
fruits of AI-powered systems in order to lower the burden and risk of using
humans in the field.

There are three groups of UAVs categorized by STM, a leading Turkish defense
industry company, by weight class, with typical operating altitude (ft) and
examples:

Class I (below 150 kg):
  Micro, 200 ft: Black Widow
  Mini, 3,000 ft: Malazgirt
  Small, 5,000 ft: Hermes 90

Class II (150 to 600 kg):
  Tactical, 10,000 ft: Karayel, Bayraktar Tactical UAV

Class III (above 600 kg):
  Medium-altitude long endurance (MALE), 45,000 ft: ANKA, Predator
  High-altitude long endurance (HALE), 65,000 ft: Global Hawk
  Combat, 65,000 ft: X-47B
Levels of Autonomy
According to NATO, the levels of autonomy of UAVs are categorized as:

Level 1: Remotely Controlled System. System reactions and behaviour depend
on operator input.

Level 2: Automated System. Reactions and behaviour depend on fixed built-in
(preprogrammed) functionality.

Level 3: Autonomous Non-Learning System. Behaviour depends upon fixed
built-in functionality or upon a fixed set of rules that dictate system
behaviour (goal-directed reaction and behaviour).

Level 4: Autonomous Learning System, with the ability to modify the rules
defining its behaviours. Behaviour depends upon a set of rules that can be
modified to continuously improve goal-directed reactions and behaviours
within an overarching set of inviolate rules/behaviours.

In operating UAVs, the duration of the OODA loop (observe, orient, decide,
and act) sets the level at which an operator may partake. The level of
autonomy is measured by the moment at which ‘the button’ is pushed.

Turkey and Autonomous Drones: STM



STM offers a wide range of products with autonomous navigation, learning and
decision making capabilities, both in a single-platform and swarm formation.
STM’s competence on deep learning and computer vision facilitates real-time
object detection, identification, tracking and classification.

Capabilities | Competences

High performance autonomous dispatch and control algorithms
Artificial intelligence
Machine learning
Sophisticated computer vision and deep learning algorithms
Vision-based control
Embedded and real-time object tracking, detection and classification
Real-time obstacle detection and avoidance
Real-time localization and mapping
Platform-tailored, advanced electronic ammunition safety, arm and fire
systems
KARGU | Autonomous Tactical Multi-Rotor Attack
Kargu is a multi-rotor UAV solution that can be deployed and operated by
single personnel. It features autonomous navigation, surveillance and
reconnaissance abilities. The system can also be used to neutralize threats
per operational requirements.

Range: 5 km

Endurance: 10 min

Maximum Altitude: 1000 m

Maximum Speed: 72 km/h

Weight: 6,285 g

ALPAGU | Fixed-Wing Autonomous Tactical Attack


Alpagu is a fixed-wing UAV solution that can be deployed and operated by
single personnel. It features autonomous navigation with surveillance and
reconnaissance capabilities. The system can also be used to neutralize
threats per operational requirements.


Range: 5 km

Endurance: 10 min

Maximum Altitude: 400 m

Maximum Speed: 80 km/h

Weight: 3,700 g



TOGAN | Autonomous Multi-Rotor Reconnaissance


Togan offers a wide range of features such as autonomous navigation, partial
autonomy in decision making, and full functionality in GPS-denied
environments. Multiple TOGAN platforms can be controlled by a single
operator.

Range: 5 km

Endurance: 50 min

Maximum Altitude: 400 m

Maximum Speed: 3300 m

Weight: 7,000 g

FUTURE OF UAVs
Operational use of Alpagu has prompted optimizations aimed at consistent
performance in the field.

With the new modifications, ALPAGU is designed to weigh less, making the UAV
faster in operation while preserving its warhead. It is a fixed-wing attack
UAV that can be operated by a single soldier. Notable for its higher altitude
capability and longer endurance, it can be used without additional air or
artillery support, can be fired from single or multiple launchers, and is
easily integrated into platforms such as ships or ground vehicles.

ALPAGU and KARGU have proven themselves in combat and have been widely used
by the Turkish Armed Forces. Successful tests and the results later obtained
in operations in Syria have showcased the UAV family to the world, and a
number of countries may now request the procurement of STM UAVs.

STM plans to organize swarms combining KARGU and ALPAGU, which are kamikaze
drones, with TOGAN, used for reconnaissance.

Swarming drones may operate by approximating human intelligence and
facilitate reconnaissance and surveillance over a monitored field. Short
range and limited endurance may lower the efficacy of UAVs on duty. Still,
the human-like capabilities of UAVs of every size in unconventional warfare,
which is defined by "unnoticed interruptions" and compels regular and even
irregular units to adopt new technology, hint at wider use. The asymmetric
and unprecedented operational scale of UAVs opens a new kind of trench
intelligence, once conducted and measured through clues such as tracked
smoke and paths.

Seray Güldane
 



