Abstract

Anomaly detection is critical for intelligent vehicle (IV) collaboration. By forming clusters/platoons, IVs can work together to accomplish complex tasks that they are unable to perform individually. To improve the security and efficiency of the Internet of Vehicles, anomaly detection for IVs has been extensively studied and a number of trust-based approaches have been proposed. However, most of these proposals either pay little attention to leader-based detection algorithms or ignore the utility of networked Roadside Units (RSUs). In this paper, we introduce a trust-based anomaly detection scheme for IVs in settings where malicious or incapable vehicles are present on the roads. The proposed scheme works by allowing IVs to detect abnormal vehicles, communicate with each other, and finally converge to trustworthy cluster heads (CHs). Periodically, the CHs take responsibility for intracluster trust management. Moreover, the scheme is enhanced with a distributed supervising mechanism and a central reputation arbitrator to assure robustness and fairness in the detection process. The simulation results show that our scheme can achieve a low detection failure rate, below 1%, demonstrating its ability to detect and filter out abnormal vehicles.

1. Introduction

The Internet of Vehicles (IoV) is an open converged network system supporting human-vehicle-environment cooperation [1]. Fusing multiple advanced technologies, such as VANETs [2], autonomous driving [3], cloud computing [4], and multiagent systems (MAS) [5], this hybrid concept plays a fundamental role in building a cooperative and effective intelligent transport system. An anomaly detection scheme is desirable in an environment filled with uncertainty. Primarily, the security problem is motivated by the question [6] “How can I trust the information content I receive?” This issue is then decomposed into two subquestions: “Is the communication channel via which I receive messages from a sender secure?” and “How can I trust the sender of the messages I receive?” The decomposition allows us to distinguish between computational trust and behavioral trust. Complementing computational trust (such as encryption and tamper-proofing), the model of behavioral trust admits that information is imperfect in an open system; individuals therefore need extra trust-related information for decision-making [7]. Such extra trust-related information can be extracted from historical reputation or elicited from the interaction experience between two individuals. By providing a measurement of trustworthiness, behavior-based trust management enables intelligent vehicles to improve collaboration by reducing false or malicious behaviors. Anomaly detection technology is the key method for building behavioral trust.

This paper aims to detect anomaly vehicles in an autonomous driving environment. As commercial IVs draw near, we must face the fact that vehicles are becoming more and more intelligent. Meanwhile, connected vehicles are unprecedentedly vulnerable when supported by an uncertain and dynamic network [8]. Malicious attacks and information tampering, along with system failures, directly threaten human lives and property. Anomaly vehicles include malicious vehicles and incapable vehicles. Malicious vehicles are entities that intend to cause damage in the driving environment. Incapable vehicles do not intend to exert negative influence; however, they may disturb order due to their limited capability. For example, an incapable intelligent vehicle may fail to behave properly in a rigid, precisely ordered automated driving platoon but may behave well in a normal driving pattern. A malicious intelligent vehicle, on the other hand, should be forbidden in any situation. To highlight our motivations, we present two illustrative scenarios. Scenario 1: in cluster/platoon-based driving, IVs frequently communicate with each other to maintain lateral/longitudinal control. Vehicles with limited capability or malicious intent may join the cluster or platoon, and their malicious or false behaviors are very likely to tamper with or disturb collaboration. In this safety-oriented case, local vehicles should be able to maintain robust intracluster trust to wipe out unqualified vehicles. Scenario 2: in an efficiency-oriented case, where IVs need to collaborate over a broad area, they exchange messages to perceive traffic conditions, request parking spot information through the VANET, and even negotiate routes to prevent traffic congestion. None of these three functions would be efficient without trustworthy collaboration. The above two scenarios suggest that a trust management scheme with anomaly detection is urgently needed.

Solutions for IoV anomaly detection still face many challenges raised by mobility, including dynamic vehicle groups, real-time constraints, and the intrinsically dynamic nature of trust itself, which make single or static trust measurements ineffective. Considering the mobile nature of vehicles, the topology changes so rapidly that preestablished trust relationships are likely to become invalid. As a result, two nodes need to build up trust in a timely fashion. Moreover, trust is not constant but changes with the driving situation. An accurate trust measurement should capture the interaction context as well as historical reputation. For example, a car with a good reputation may not be trustworthy when it is speeding. A trust management system therefore calls for the ability to synthesize multiple resources, whether from roads or from the cloud. The essence of the Internet of Vehicles is to obtain more safety and efficiency by integrating multiple infrastructures, networks, and vehicle intelligence. In accordance with this idea, we propose a hybrid approach called Cluster-Based Anomaly Detection (CAD). Figure 1 describes the framework of CAD. CAD is composed of two major components, namely, a cluster-based trust component and a central reputation component. The cluster-based trust component builds timely trust to reflect the dynamic situation, while the central reputation component evaluates a vehicle's trust from a long-term perspective. These two components interact through evidence uploading and reputation provision. The cluster-based trust component has two major functions, namely, trust-based AP clustering and mutual supervision, to maintain the robustness of dynamic trust.

The major contribution of this paper lies in the following two aspects:
(i) We identify cluster-based trust and reputation as the two major components of anomaly detection. To exploit cluster-based trust, we propose a cluster-based trust evaluation algorithm, which modifies Affinity Propagation clustering to generate the most trustworthy cluster head based on evaluation and communication. The algorithm runs in a distributed manner and shows robustness to malicious/incapable vehicles.
(ii) We adopt a sparse RSU-enhanced reputation provision scheme. A Central Arbitrator (CA) collects evidences from sparse RSUs. A reputation system is then established to evaluate global and historical reputation from the accumulated data.

2. Related Work

Trust issues stem from the security and social psychology fields and their theory has grown in organization management. More recently, as network technology constantly changes the way people interact, formerly stable and well-structured organizations are likely to transform into another paradigm featuring agile structures and ad hoc groups. IoV, for example, is a typical agile structure that calls for collaboration among agents. Ramchurn et al. [9] pointed out that “trust pervades multiagent interaction at all levels,” generally including individual-level trust, whereby an agent has some beliefs about the honesty or reciprocative nature of its interaction partners, and system-level trust, whereby the agents in the system are forced to be trustworthy by the rules of encounter that regulate the system. Although various schemes have been investigated, the authors noticed that trust at these two levels has mostly been dealt with separately. This insight inspired us to develop a hybrid framework which takes both levels of trust into consideration.

Most existing systems in VANETs use a distributed approach. Raya et al. [10] argue that trust should be attributed to data per se in ephemeral ad hoc networks and propose a framework for data-centric trust establishment. Their scheme shows high resilience to attackers and can converge to a stable, correct decision. However, Raya's trust mechanism may do little to reduce attackers at the system level; since there is no punishment for cheating, attackers are seldom suppressed. Chen et al. [11] present a decentralized framework combining message propagation and trust evaluation in VANETs. Specifically, their trust measurement consists of role-based trust and experience-based trust. It is a good attempt to synthesize static a priori trust (role-based trust) with dynamic situational trust (experience-based trust); nonetheless, they did not take historical reputation into consideration. Rostamzadeh et al. [12] focus on trustworthy information dissemination by assigning a trust value to each road segment; the dissemination task is then to find a path consisting of a series of safe road segments. Their work features good scalability and thus has potential in many applications. DTM2 [13] is a distributed trust model inspired by the Job Market model. With the help of third-party hardware, the system can incentivize good behaviors and punish malicious behaviors by changing each vehicle's signal value. To conclude, the decentralized approach is developed under the assumption that there is no centralized third party to evaluate and maintain trust values.

Recently, RSU deployment has been promoted by intelligent transport system groups, and centralized trust management is no longer an overambitious goal with the help of RSUs. A centralized approach is able to evaluate trust values from a global and historical view. Therefore, many works have begun to exhibit a centralized trend as a complement to distributed systems. Wang et al. [14] proposed a vertical handoff method, which improves the availability of network access and therefore contributes to building centralized trust management systems. Machado and Venkatasubramanian [15] aim to aggregate the advantages of both centralized and distributed trust computation. The authors categorize the messages exchanged in VANETs into alerts and reports; alerts are time-critical responses to an incident, while reports are evidence used to evaluate the quality of alerts. RSUs play the role of a Central Authority (CA) that keeps track of messages and accordingly maintains a global reputation for each vehicle. Their central grading system can efficiently distinguish dishonest nodes in real-life scenarios. Huang et al. [16] utilize identity-based cryptography to integrate entity-based trust and social trust in a proxy server; the email interactions among individuals are mined to obtain social trust, and trust measurements must be requested and acquired from this server. One disadvantage of this system, as the authors mention, is that the service may experience long delays due to network latency and the overhead of the management entities that mine the email source. Such latency problems burden centralized reputation systems. The authors then propose a situation-aware trust architecture for VANETs [17], in which a predictive trust setup system is designed to reduce on-the-scene trust setup latency. They also envision that roadside infrastructure deserves more attention and research.

3. Trust Establishment by Peer Detecting

In this section, we illustrate the establishment of cluster-based trust. To establish trust among IVs, the key is to generate a trustworthy CH. A cluster and its head are generated after several rounds of iteration, and the generated CH is an authoritative node managing intracluster trust. One clustering algorithm that works by passing messages between nodes is Affinity Propagation (AP). To start, measures of similarity are calculated for each pair; real-valued messages are then exchanged between pairs of nodes until high-quality exemplars and corresponding clusters gradually emerge. The schematic is shown in Figure 2.

AP works by passing messages between nodes, which makes it naturally more suitable for trust establishment than other clustering algorithms because of the following characteristics:
(i) Transitivity: in trust theory, if node i has no direct trust relation with node k, it can still build an indirect trust relation with k via an intermediate node j; likewise, in our AP, node i makes a judgement about node k with the help of indirect judgements from other nodes. The primitive AP clustering algorithm therefore reflects transitivity well, making it fit trust establishment.
(ii) Asymmetry: trust is not symmetric; that i trusts k does not guarantee that k trusts i. AP has the ability to cluster with an asymmetric “distance measurement.”
(iii) Distributed manner: AP runs in a completely distributed manner, increasing robustness to attacks; moreover, it achieves a much lower average squared error than ordinary clustering methods [18].

The AP algorithm works iteratively. The similarity s(i, k) is computed by node i about node k to measure the “distance” between the pair. The responsibility r(i, k) is sent from i to k to tell how eager i wants k to be its CH. The availability a(i, k) is sent from k to i to tell how eager k wants to be i's CH. The self-responsibility r(k, k) and self-availability a(k, k) both represent accumulated evidence reflecting whether k is suitable to be a CH. The updating process for responsibility and availability in every iteration is illustrated below. More details can be found in [18, 19], which have laid the foundation of our work.

Primitive AP Iteration Process is as follows:

  r(i, k) ← s(i, k) − max_{k′ ≠ k} {a(i, k′) + s(i, k′)},
  a(i, k) ← min{0, r(k, k) + Σ_{i′ ∉ {i, k}} max{0, r(i′, k)}},  for i ≠ k,
  a(k, k) ← Σ_{i′ ≠ k} max{0, r(i′, k)}.

To make the real-valued messages converge, messages are damped as m ← λ · m_old + (1 − λ) · m_new, where the damping factor λ is a weighting factor that ranges from 0 to 1. When the messages have converged, a CH is generated for each node i:

  CH(i) = argmax_k {a(i, k) + r(i, k)}.
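For concreteness, the following Python sketch implements one damped AP iteration in centralized matrix form (the name ap_iteration is ours; in our scheme each IV computes only its own rows of R and A and exchanges them over DSRC, so this is an approximation for illustration):

    import numpy as np

    def ap_iteration(S, R, A, lam=0.9):
        # One damped AP iteration. S[i, k] is the similarity (Section 3.1
        # sets S[i, k] = -UntrustDegree_i(k)); R and A hold current messages.
        n = S.shape[0]
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        # Availability: a(i,k) = min{0, r(k,k) + sum max{0, r(i',k)}}
        Rp = np.maximum(R_new, 0.0)
        np.fill_diagonal(Rp, R_new.diagonal())            # keep r(k,k) unclipped
        colsum = Rp.sum(axis=0)
        A_new = np.minimum(0.0, colsum[None, :] - Rp)
        np.fill_diagonal(A_new, colsum - Rp.diagonal())   # a(k,k)
        # Damping with weighting factor lam stabilizes the message exchange
        return lam * R + (1 - lam) * R_new, lam * A + (1 - lam) * A_new

    # After convergence, node i picks CH(i) = argmax_k {a(i,k) + r(i,k)};
    # node k regards itself as a CH when r(k,k) + a(k,k) > 0.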

3.1. UntrustDegree

Our proposed scheme uses the fundamental idea of Affinity Propagation from a trust perspective. In general, AP can detect anomaly vehicles in a group. We design an UntrustDegree function as the “distance measurement” for the AP algorithm to find “the most trustworthy node,” that is, the node which minimizes the overall UntrustDegree. The function is calculated automatically by each IV: an IV observes other vehicles' behaviors and assigns an UntrustDegree according to its knowledge:

  UntrustDegree_i(j) = f(Identity_j, Context, Actions_j).

Identity_j is one item from the identity set, denoting the real identity of one car, and can be represented by a unique digital number. Context is a vector predefined with some basic values that give the environmental context (e.g., the weather). Actions_j is a vector recording the basic actions that vehicle j has performed recently. With the help of behavior detection technologies [20] or interactive gaming [21], we reasonably assume that IVs are intelligent enough to evaluate each other. The resulting value is primarily positive but is negated, namely, s(i, j) = −UntrustDegree_i(j), to fit the AP algorithm.
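As an illustration, the UntrustDegree could be realized as a simple penalty aggregation over recently observed actions; the action labels and penalty weights below are hypothetical placeholders, since the paper leaves the concrete evaluation to on-board behavior detection:

    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        identity: int                                 # unique digital identity of the observed car
        context: dict = field(default_factory=dict)   # environmental context, e.g., {"weather": "rain"}
        actions: list = field(default_factory=list)   # basic actions observed recently

    # Hypothetical per-action penalties; real weights would come from
    # behavior detection models [20] or interactive gaming [21].
    ACTION_PENALTY = {"normal": 0.0, "over_speed": 0.3, "false_message": 0.6}

    def untrust_degree(obs: Observation) -> float:
        # Average the penalties of recent actions, clipped into [0, 1].
        if not obs.actions:
            return 0.0
        score = sum(ACTION_PENALTY.get(a, 0.0) for a in obs.actions) / len(obs.actions)
        return min(1.0, score)

    def similarity(obs: Observation) -> float:
        # AP expects larger-is-better similarities, hence the negation.
        return -untrust_degree(obs)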

The self-UntrustDegree, and hence the self-similarity s(k, k), is initialized to the same value for every node. It should be noted that a higher self-trust degree (a larger s(k, k)) makes a node more likely to become the cluster head. In our final model (discussed in Section 5.2), the effective self-trust is set at a value which balances the IV's own evaluation and its historical reputation. Historical reputation can only be legally announced by the CA; when a group of IVs passes by an RSU, the RSU proactively downloads/broadcasts their reputations to the IVs.

3.2. Mutual Supervisor Model

Each node v_j receives responsibility messages from its neighborhood. Also, v_j broadcasts availability messages to the neighborhood to claim how suitable it is to become a CH. However, a malicious/incapable node can cheat/mistake in this message passing process by broadcasting false values. For example, if v_j broadcasts a very high availability a(i, j) to other nodes, it is more likely to be elected CH according to the AP algorithm. We need a mechanism to prevent nodes from broadcasting false availability or responsibility.

We propose a supervisor model to alleviate cheating/mistaking in this process. The core of the mutual supervisor model is to match each node with a supervisor. Among the moving companions of one vehicle, a supervisor is another IV which can receive almost the same broadcast information by sharing the same wireless channel. A supervisor v_i therefore listens to the messages related to its supervisee v_s and validates v_s's availability/responsibility by repeating the suspicious calculation. If the result broadcast by v_s itself and the result recalculated by v_i differ greatly, then v_s is very likely to have cheated in the message passing process. The integral mechanism of the supervisor model is illustrated in Figure 3.

To assure a stable and honest supervisor, we apply Algorithm 1. From this algorithm, we see that v_i may supervise another node v_j only when (1) v_i does not tend to believe v_j and (2) v_i and v_j have small relative mobility, that is, they are stable driving companions. This mechanism builds a mutual supervision relationship between two adversary nodes, so supervisor and supervisee are unlikely to collude. More importantly, it can identify cheating nodes in the message passing process.

Input: a supervisor v_i, nearby nodes' states (position, speed)
Output: pair(supervisor, supervisee)
(1) For node v_j in DSRC (Dedicated Short Range Communication) range
(2)   M(i, j) ← relative mobility of v_i and v_j    // calculate mobility metric
(3)   If v_j has no supervisor and s(i, j) < θ
(4)    Then add v_j to the Supervisee Candidate List
(5) End For
(6) For v_j in the Supervisee Candidate List
(7)  If M(i, j) < M(i, j*) Then j* ← j    // find the most stable supervisee
(8) End For
(9) Return matched pair(v_i, v_j*)     // v_i supervises v_j*

The input of Algorithm 1 is the IVs' state tuples (position, speed). For a supervisor v_i, running this algorithm works out a supervisee. For each v_j in DSRC range, v_i calculates the mobility metric M(i, j) (lines (1)-(2)); the smaller the metric is, the more similar the two motions are. Thus, a small metric indicates a stable driving companion. If any v_j has no supervisor and s(i, j) < θ (this indicates that v_i does not tend to trust v_j), v_i adds v_j to the Supervisee Candidate List (lines (3)-(4)). After that, v_i chooses the most stable candidate (with the smallest mobility metric) to be the supervisee (lines (6)–(8)). Finally, the pair (v_i, v_j*) is returned (line (9)).
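The following Python sketch mirrors Algorithm 1; the Euclidean form of the mobility metric and the trust threshold theta are assumptions, since the paper does not pin down their exact values:

    import math

    def match_supervisee(i, nodes, similarity, theta=-0.5, alpha=1.0):
        # nodes: id -> (position, speed, has_supervisor) for IVs in DSRC range.
        # similarity(i, j) returns s(i, j) = -UntrustDegree_i(j).
        pos_i, speed_i, _ = nodes[i]
        best, best_metric = None, math.inf
        for j, (pos_j, speed_j, has_sup) in nodes.items():
            if j == i or has_sup:
                continue
            # Mobility metric: small value = similar motion = stable companion.
            metric = math.dist(pos_i, pos_j) + alpha * abs(speed_i - speed_j)
            # Only candidates that i does not tend to trust can be supervised,
            # so supervisor and supervisee are unlikely to collude.
            if similarity(i, j) < theta and metric < best_metric:
                best, best_metric = j, metric
        return (i, best)   # i supervises best; best is None if no candidate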

3.3. Generating CH by Message Passing

We try to use a distributed algorithm to reach a consensus among a large number of opinions. Each node v_i maintains a neighbor list. As Table 1 shows, the list keeps the similarity, responsibility, and availability values for each neighbor v_j. Additionally, v_i also maintains a supervision field for its supervisee v_s.

Generating a CH needs several iterations, which are periodically triggered by time. Besides, broadcasting and supervising also need a synchronized clock. Hello beacons are broadcast and received to maintain local awareness.

Broadcast and Receive Hello Beacons Process is as follows:
(1) In every beacon period, each v_i broadcasts a hello beacon carrying its identity and state (position, speed, direction).
(2) Each receiving neighbor v_j calculates whether the two vehicles are traveling in the same direction.
(3) v_j adds/updates v_i in its neighbor list.
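A neighbor-list entry could be sketched as below; the exact fields of Table 1 are not reproduced here, so the state fields are our assumption, while the three AP quantities follow directly from Section 3:

    from dataclasses import dataclass

    @dataclass
    class NeighborEntry:
        # One row of v_i's neighbor list: AP messages plus the last beacon state.
        similarity: float = 0.0       # s(i, j) = -UntrustDegree_i(j)
        responsibility: float = 0.0   # damped r(i, j)
        availability: float = 0.0     # damped a(i, j)
        position: tuple = (0.0, 0.0)
        speed: float = 0.0

    neighbor_list: dict = {}   # keyed by neighbor identity

    def on_hello_beacon(sender_id, position, speed, same_direction):
        # Keep only co-directional vehicles as clustering neighbors.
        if same_direction:
            entry = neighbor_list.setdefault(sender_id, NeighborEntry())
            entry.position, entry.speed = position, speed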

Availability and responsibility messages should be broadcast periodically; we define this period as T. In each period, every v_i calculates the responsibility and availability messages for each neighbor v_j. These values are damped with the previous values stored in the neighbor list. v_i then broadcasts the messages of all its neighbors.

According to the mutual supervisor model, the process of calculating availability and responsibility should be supervised. Each IV automatically chooses a supervisee via the Supervisor Matching algorithm (Algorithm 1). The supervisor checks the supervisee's calculation results and releases an alert when the supervisee's messages are suspicious. The process enhanced by the mutual supervisor model is illustrated below.

Supervising and Message Passing Process. In every period T, each v_i will do the following:
(1) It will find a matching supervisee v_s, prepared for the next period's iteration. If one is found in this period, it is claimed by v_i; if not, v_i will try again in the next period.
(2) If it hears an alert about some v_j, it will ignore v_j's messages in this period.
(3) It will calculate the responsibility r(i, j) for each neighbor v_j.
(4) It will update with the damping factor λ and store: r(i, j) ← λ · r_old(i, j) + (1 − λ) · r(i, j).
(5) It will calculate the availability a(j, i) for each neighbor v_j, that is, how available v_i is to serve as v_j's CH.
(6) It will update with the damping factor λ and store: a(j, i) ← λ · a_old(j, i) + (1 − λ) · a(j, i).
(7) It will determine whether it has itself converged to a CH: if r(i, i) + a(i, i) > 0, then v_i sets itself as CH.
(8) It will broadcast its Responsibility and Availability arrays, R_i and A_i.
(9) It will supervise v_s: v_i updates and calculates r′(s, j) and a′(s, j) for v_s.
(10) It will listen to v_s's messages r(s, j) and a(s, j); if |r(s, j) − r′(s, j)| > ε or |a(s, j) − a′(s, j)| > ε, it will broadcast an alert about v_s (a minimal check is sketched below).
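Steps (9)-(10) amount to a tolerance check between the supervisee's broadcast messages and the supervisor's own recomputation; a minimal sketch, with the tolerance eps as an assumed parameter:

    def check_supervisee(broadcast_msgs, recomputed_msgs, eps=1e-3):
        # broadcast_msgs / recomputed_msgs: neighbor id -> (r, a), as announced
        # by the supervisee vs. independently recomputed by the supervisor.
        for j, (r, a) in broadcast_msgs.items():
            r_chk, a_chk = recomputed_msgs.get(j, (r, a))
            if abs(r - r_chk) > eps or abs(a - a_chk) > eps:
                return "alert"   # neighbors then ignore the supervisee this period
        return "ok"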

The period T must be small enough to allow the algorithm to converge within one Iteration Cycle. We have injected a supervision mechanism into the clustering process: any node that broadcasts false availability or responsibility is very likely to be discovered. The punishment of malicious nodes is twofold: first, their messages are ignored by neighbors following an alert; second, the malicious behavior is reported to the CA.

In any Iteration Cycle, a CH is generated by the iterations. The CH will claim its role and broadcast a Final Message, which represents the CH's final evaluation of each cluster member v_j:

  FinalMessage = {⟨id_j, UntrustDegree_CH(j)⟩ : v_j is a cluster member}.

The FinalMessage is trustworthy since it is sent by the CH, which has been elected as “the most trustworthy node” by all group members. Built upon the FinalMessage, intracluster trust management is relatively reliable and can support IVs' collaborations.

4. Degrading Anomaly by Evidence Evaluation

Reputation-based methods have been widely used in web services [22, 23] and cloud computing [24] to enhance system reliability and robustness. We believe this method can also improve system performance in the Internet of Vehicles. In this scenario, IVs observe and evaluate each other's behavior quality. Moreover, they form evidences and report them to the CA. The CA is supported by strong storage and computational resources and is thus capable of computing reputation from a global view. A global reputation is valuable for on-the-road IVs choosing potential collaborators. More importantly, reputation can be increased or degraded, as a system-level enforcement, to incentivize good behaviors as well as to punish bad ones.

IVs leverage a “store-upload” mechanism to deliver evidences to the CA. Since RSUs are sparsely deployed, each IV first stores evidences in its own storage and then uploads them when moving into an RSU's service range. Evidence evaluation lies at the core of reputation: the CA can reach a conclusion on a certain behavior by evaluating and merging different pieces of evidence from different individuals. Note that not all evidences are consistent, and not all evidences are trustworthy; for instance, to disturb the reputation system, a malicious node may report false evidences.

To mathematically model evidence evaluation, assume the CA has to decide among several basic behaviors b_1, ..., b_m, based on n pieces of evidence e_1, ..., e_n uploaded from different IVs. Let B(v) denote the final judgement on the behavior type of vehicle v. The following three methods are leveraged to reach a consensus evaluation, with the ability to filter false evidences.

(A) Majority Voting. The final evaluation accords with the majority. Given the counts c_1, ..., c_m of each type of observed behavior, the behavior type of v is defined by

  B(v) = b_{j*},  j* = argmax_j c_j.

(B) Weighted Voting. For each behavior, this method sums up the values of all votes supporting that behavior, with each vote weighted by the trust level w_i of the reporting IV. The type with the highest value is the final evaluation:

  B(v) = b_{j*},  j* = argmax_j Σ_{i : e_i = b_j} w_i.

(C) Bayesian Inference. Among data fusion techniques, Bayesian Inference (BI) is the most popular one used for trust building and management. To use BI, the a priori probability of each behavior is first assigned; the posterior probability of each behavior is then calculated from the set of evidences using Bayes' theorem. For behavior b_j,

  P(b_j | e_1, ..., e_n) = P(e_1, ..., e_n | b_j) P(b_j) / Σ_{k=1}^{m} P(e_1, ..., e_n | b_k) P(b_k).

The final consensus is the behavior type with the maximum posterior probability:

  B(v) = b_{j*},  j* = argmax_j P(b_j | e_1, ..., e_n).
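The three fusion rules can be sketched as follows; the behavior labels, trust weights, and likelihood table are illustrative placeholders rather than the actual entries of Tables 2 and 5:

    from collections import Counter

    BEHAVIORS = ["cooperative", "over_speed", "false_message"]   # example labels

    def majority_voting(evidences):
        # evidences: list of reported behavior labels for one vehicle
        return Counter(evidences).most_common(1)[0][0]

    def weighted_voting(evidences, weights):
        # weights[i]: trust level of the IV that uploaded evidences[i]
        score = {b: 0.0 for b in BEHAVIORS}
        for e, w in zip(evidences, weights):
            score[e] += w
        return max(score, key=score.get)

    def bayesian_inference(evidences, prior, likelihood):
        # prior[b]: a priori probability of behavior b;
        # likelihood[b][e]: probability that one report says e given true behavior b.
        # Reports are assumed conditionally independent given the true behavior.
        post = dict(prior)
        for e in evidences:
            for b in BEHAVIORS:
                post[b] *= likelihood[b][e]
        z = sum(post.values()) or 1.0
        return max(BEHAVIORS, key=lambda b: post[b] / z)

For example, majority_voting(["false_message", "false_message", "cooperative"]) returns "false_message".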

Besides evidence evaluation, the reputation evolution rule is another critical issue: an effective reputation system requires appropriate reputation evolution rules. We discuss these rules in Section 5.1.

5. Performance and Analysis

To evaluate the performance of our scheme, we ran extensive simulations in TransModeler with a real map and high-fidelity data. We use a map of the urban area of San Antonio, USA, and feed in real macroscopic traffic data, measured on critical roads and sections, to reconstruct a realistic traffic scenario. We believe that macroscopic data can reflect traffic dynamics to a high extent. We do not simulate the wireless medium in this case since it is orthogonal to our evaluation. All simulations were performed with approximately 400 vehicles on a 6-mile expressway. Five RSUs are sparsely deployed along the expressway, as shown in Figure 4. The DSRC range is set to 300 m. Each simulation ran for 600 s; however, only the last 400 s were used for performance metric calculations.

As noted earlier, an IV will be observed and evaluated by neighboring IVs. We use an example with three Basic Behaviors, listed in Table 2. Each behavior induces a different interactive trust, and according to the reputation evolution rules, each behavior deserves a corresponding change in reputation.

To depict complex malicious/inappropriate behaviors, which are often mixtures of different basic behaviors, we simulate several behavior patterns, listed in Table 3. An anomaly node produces one behavior in every period. These patterns are simplified to make the simulation feasible; we believe they still reflect the validity of our designed scheme well.

5.1. The Effect of AP Algorithm

Ideally, AP clustering would assign a CH to every on-the-road vehicle. However, a small portion of vehicles, N_alone, may be left alone when the iterations finish. There are two major reasons for such nodes: the node could not find a converged CH candidate in its neighborhood, or the node itself is a CH but is the only member of its cluster. Besides the N_alone lone nodes, there are N_covered nodes which form normal clusters. For the anomaly-node-free simulation, several results are shown in Table 2. The Covered Ratio is a parameter describing how much the clustering result covers the whole set of participants:

  Covered Ratio = N_covered / (N_covered + N_alone).

For the anomaly simulations, several results are shown in Table 3. The simulations were run several times, so Figures 5, 6, 7, and 8 show averaged results. A trade-off between Covered Ratio and Cluster Member Number can be observed in Table 4: the higher the Covered Ratio, the lower the Cluster Member Number.

According to [18], the Damping Factor is critical for convergence, and different Damping Factors result in different clustering outcomes. In general, a bigger Damping Factor leads to a relatively higher Covered Ratio and a lower Cluster Member Number. We recommend setting a relatively large Damping Factor so that the algorithm tends to produce an approximate but stable solution.

Another important parameter not mentioned in [18] is the Iteration Cycle. Mathematically, the convergence of AP clustering is only influenced by the Damping Factor, because the authors implicitly assume that AP clustering always has enough time to iterate. However, we have modified and applied this algorithm for anomaly detection, where the communication topology is constantly changing; the communication environment therefore cannot always provide plenty of time for clustering, and the Iteration Cycle must be regarded as a critical parameter. If it is too short, the clustering process will not produce enough CHs to cover most nodes. On the other hand, if the cycle is too long, over-iteration will generate too many CHs. To conclude, the modified AP clustering is oriented to real applications rather than a pure math problem, and several parameters should be meticulously adjusted for real deployment, among which the most important are the Damping Factor and the Iteration Cycle.
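The interplay of the two parameters can be seen in a small driver loop reusing the ap_iteration sketch from Section 3: the Iteration Cycle bounds how many message-passing rounds fit into one cycle (max_rounds below is our stand-in for that budget), while the Damping Factor lam controls how fast those rounds converge:

    import numpy as np

    def run_clustering(S, max_rounds, lam=0.9):
        # Too few rounds: not enough CHs emerge to cover most nodes;
        # too many rounds: over-iteration generates too many CHs.
        n = S.shape[0]
        R, A = np.zeros((n, n)), np.zeros((n, n))
        for _ in range(max_rounds):
            R, A = ap_iteration(S, R, A, lam)   # from the Section 3 sketch
        return np.argmax(A + R, axis=1)         # CH choice for each node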

We use two metrics to measure the effectiveness of the modified AP algorithm.

(1) Direct Influence. We define one Failure as an anomaly node being elected CH; the Failure Rate measures the direct influence of anomaly nodes and is also called the unsuccessful anomaly detection rate:

  Failure Rate = (number of anomaly nodes elected as CH) / (total number of anomaly nodes).

(2) Indirect Influence. We define the Risk Degree to characterize how much potential influence an anomaly node has when it is inside a cluster:

  Risk Degree = ClusterMemberNumber × (1 − UntrustDegree).

If an anomaly node becomes CH, its UntrustDegree is 0; otherwise, its UntrustDegree is taken from the CH's Final Message, which expresses the CH's opinion of each node. The Risk Degree thus captures the indirect influence of an unqualified node according to its role (CH or member) and its UntrustDegree. For example, if an unqualified node is admitted into a 20-member cluster and the CH's Final Message claims its UntrustDegree is 0.5, then its Risk Degree is 20 × (1 − 0.5) = 10; if it is admitted into a 5-member cluster instead, its Risk Degree is 2.5. The latter risk is much smaller because the anomaly node has fewer potential partners and thus poses fewer threats.
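Both metrics reduce to a few lines; the Risk Degree formula below is reconstructed to match the worked examples above (20 × (1 − 0.5) = 10 and 5 × (1 − 0.5) = 2.5):

    def failure_rate(num_anomalies_elected_ch, num_anomaly_nodes):
        # Fraction of anomaly nodes that slipped through and became CH.
        return num_anomalies_elected_ch / num_anomaly_nodes

    def risk_degree(cluster_member_number, untrust_degree):
        # An anomaly elected as CH has UntrustDegree 0, which yields the
        # maximum risk for its cluster size.
        return cluster_member_number * (1.0 - untrust_degree)

    assert risk_degree(20, 0.5) == 10.0   # worked example from the text
    assert risk_degree(5, 0.5) == 2.5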

5.2. Comparison of Four Models

Our performance evaluation is based on four models: the Primary AP model (PAP), the Tampering AP model (TAP), the Tampering & Supervising AP model (TSAP), and the Converged AP model (CAP). PAP is directly derived from the AP clustering algorithm. TAP models the clustering scenario where anomaly nodes can tamper with or disturb the message passing process; in short, TAP adds tampering/disturbing behaviors on top of PAP. To alleviate the influence of tampering/disturbing, the TSAP model injects the Mutual Supervision Model into TAP to identify anomaly nodes. Finally, CAP is a converged model which enhances TSAP with historical reputation.

As a converged model, CAP combines historical reputation with real-time cluster-based trust. The CA collects uploaded evidences from on-road vehicles and uses three techniques to fuse them: Majority Voting, Weighted Voting, and Bayesian Inference. For Bayesian Inference, the prior distribution of behaviors and the observed results are defined in Table 5.

We assume that anomaly nodes use a random reporting strategy: they generate evidences randomly, regardless of what other nodes have really done, whereas normal nodes always report true evidences. Figure 5 describes the effects of the different evidence merging techniques. In this simulation, the three techniques are almost equally effective; however, MV and WV are more suitable for data merging since they incur less computational overhead. Figure 6 shows how four anomaly nodes' reputations evolve in the system: anomaly nodes are distinguished and punished by the CA.

In the iteration process, IVs with larger preference values (i.e., smaller self-UntrustDegree) are more likely to be chosen as CH; in AP terms, these self-similarities are the “preferences.” In PAP/TAP/TSAP, the preference is set as the median of the input similarities. However, in CAP, where historical reputation is considered by the algorithm, v_i's preference is calculated from both its self-evaluation and its CA-announced reputation.

When v_i's reputation is low, its UntrustDegree is big; the preference therefore becomes small (the preference is a negative real number), indicating that v_i is not suitable to be CH.
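One plausible instantiation of the CAP preference, with gamma as an assumed scaling knob (the paper's exact formula is not reproduced here):

    def cap_preference(base_preference, reputation, gamma=1.0):
        # base_preference: median of similarities (a negative number), as in
        # PAP/TAP/TSAP; reputation in [0, 1] as announced by the CA.
        # A low reputation inflates the effective self-UntrustDegree, pushing
        # the preference further below zero and making CH election less likely.
        return base_preference * (1.0 + gamma * (1.0 - reputation))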

Figure 7 shows the comparison of the four models. We set the unqualified (anomaly) node percentage as the variable and simulate with percentages ranging from 0% to 25%, because a higher percentage is not realistic.

Generally, when the anomaly node percentage is low (≤5%), the Failure Rate is 0%. As the percentage goes up, the Failure Rate rises as well. TAP is the model with tampering/disturbing but no supervision mechanism, so it performs worse than PAP (no tampering/disturbing) and TSAP (tampering/disturbing plus the supervision model). In contrast, CAP is the converged model (tampering/disturbing, supervision model, and reputation) with a strong defense against anomaly nodes, so it shows the highest robustness among the four models. Furthermore, each model can limit the Failure Rate below 1% even when the anomaly node percentage is as high as 25%.

The Risk Degree characterizes how much potential influence an anomaly node has when it is inside a cluster. Figure 8 shows the Anomaly Percentage-Risk Degree curves for the four models. The Risk Degree is initially low when the Anomaly Node Percentage is low; it then suddenly peaks when the percentage increases slightly, and finally it declines steadily as the percentage keeps increasing. The explanation for this curve is as follows. When the percentage is very low (≤1%), tampering/disturbing is rare, and anomaly nodes are therefore easily distinguished by normal nodes. As a result, anomaly nodes are very likely to be left alone; that is, they are excluded from big clusters by the AP algorithm, so the overall Risk Degree is low. When the percentage goes higher but is still moderate (≤5%), it still indicates a “safe environment,” and IVs tend to form “big clusters”; however, with a higher anomaly node percentage, more anomaly nodes get chances to join big clusters through more tampering/disturbing. According to formula (13), even one anomaly node in a big cluster causes a big Risk Degree. When the percentage increases beyond 5%, our algorithms tend to be conservative and form “small clusters” with fewer cluster members; fewer members render a lower Risk Degree. According to Figure 8, CAP can limit the Risk Degree under 4, demonstrating that our trust management is effective at risk control.

6. Conclusion and Future Work

Our system aims to build a trustworthy platform to detect abnormal vehicles. To this end, we modified Affinity Propagation to elect the most trustworthy node among vehicles, called the cluster head. The CH maintains trust management during a period until a new CH is elected. We also considered that AP is executed in a distributed manner and is thus easily tampered with by malicious nodes, so we presented a mutual supervision model to tackle tampering behaviors. Lastly, we blended another component, the CA, into our system. The CA consists of servers and sparse RSUs and is able to provide historical reputation for better decision-making. Overall, this trust management system can detect and filter out anomaly nodes.

In the future, great effort is needed on both the in-vehicle system and the RSUs to strengthen our secure system. These efforts include deploying mobile and local CAs using cloud computing techniques, improving the intelligence of mutual trust evaluation, and reducing the overhead of the detection process.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported by the National High-Tech Research and Development Program (863) of China under Grant no. 2012AA111601 and by the 2015 Construction of Key Discipline under Grant no. 700200253.