Software Engineering Department
The Software Engineering Department is one of the leading centers for education and research in software engineering. Drawing on experienced faculty, modern facilities, and a focus on innovation, we prepare students for success in their academic and professional careers. Join us and become part of a dynamic community that is shaping the future.
Task allocation, an important issue in multi-agent systems (MAS), is defined as allocating tasks to agents such that the maximum number of tasks is performed in the minimum time. A vast range of application domains, such as scheduling, cooperation in crisis management, and project management, deals with the task allocation problem. Despite the plethora of algorithms proposed to solve this problem in different application domains, research on a formalism for the problem is scarce. Such a formalism can serve as a way to better understand and analyze the behavior of real-world systems. In this paper, we propose a new formalism for specifying capability-based task allocation in MAS. The formalism can be used in different application domains to help domain experts analyze and test their algorithms with more precision. To show its applicability, we consider two algorithms as case studies and formalize their inputs and outputs using the proposed formalism. The results indicate that our formalism is promising for specifying capability-based task allocation in MAS at a proper level of abstraction. © 2021 IEEE.
Question answering is a hot topic in artificial intelligence with many real-world applications. The field aims at generating an answer to the user's question by analyzing a massive volume of text documents. Answer selection is a significant part of a question answering system; it attempts to extract the most relevant answers to the user's question from a pool of candidate answers. Recently, researchers have attempted to solve the answer selection task using deep neural networks. They first employed recurrent neural networks and then gradually migrated to convolutional neural networks. More recently, language models, which are themselves implemented with deep neural networks, have been considered. In this research, DistilBERT was employed as the language model. The outputs of the question analysis part and the expected answer extraction component are combined with the [CLS] token output to form the final feature vector, which improves the method's performance. Several experiments are performed to evaluate the effectiveness of the proposed method, and the results are reported on the MAP and MRR metrics: MAP improves by 0.6% and MRR by 0.2%. The results show that using a heavy language model does not guarantee a more reliable answer selection method, and that particular words, such as the question word and the expected answer word, can improve performance. © 2020 IEEE.
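The abstract describes concatenating the DistilBERT [CLS] representation with extra question-analysis features before scoring. The following is a minimal sketch of that composition, assuming the Hugging Face transformers and PyTorch libraries; the two-value feature vector and the untrained linear scorer are illustrative placeholders, not the paper's actual architecture.

```python
# Sketch: score a (question, answer) pair with a DistilBERT [CLS] vector
# concatenated with extra hand-crafted features (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
scorer = torch.nn.Linear(768 + 2, 1)  # 768 = DistilBERT hidden size, 2 = extra features

def score(question: str, answer: str, extra_features: list) -> float:
    enc = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = encoder(**enc)
    cls = out.last_hidden_state[:, 0]                 # [CLS] token vector
    feats = torch.tensor([extra_features], dtype=torch.float)
    logit = scorer(torch.cat([cls, feats], dim=-1))   # combine and score
    return torch.sigmoid(logit).item()

# e.g. extra_features = [question_word_id, expected_answer_type_match]
print(score("Who wrote Hamlet?", "William Shakespeare wrote Hamlet.", [1.0, 1.0]))
```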
A modeling language is a way to describe the syntax, semantics, and constraints needed for creating models. Defining a Domain-Specific Modeling Language (DSML) instead of using a general-purpose one increases the productivity of the developer as well as the quality of the resulting model. In this paper, we propose a DSML for the Mitigation phase of Emergency Response Environments (EREs). We extend the TAO framework based on the textual patterns it provides, and we also extend MAS-ML to support modeling the ERE Mitigation phase. To evaluate this work, a case study is modeled with the proposed modeling language. A higher abstraction level, less effort, and a faster development process are the results of the proposed modeling language. © 2014 IEEE.
Daher, H., Hoseindoost, S., Zamani, B., Fatemi, A., pp. 35-41
In case of a disaster, planning pedestrian evacuation from buildings is a major issue, since the disaster threatens human lives. To cope with this problem, evacuation plans are developed to ensure efficient evacuation in minimum time. These plans can be very sophisticated, depending on the complexity of the evacuation environment, which advocates the use of architectures such as Multi-Agent Systems (MAS) to develop evacuation plans before a real accident happens. Since developing an evacuation plan using MAS requires considerable effort, finding more efficient approaches is still an open problem. This paper introduces a new approach, based on model-driven principles, to support the development of evacuation plans. The approach includes a graphical editor for designing evacuation models, automatic generation of the evacuation plan code, and running the generated code on a MAS platform. We evaluated our approach on a case study. The results show that it provides higher speed, less effort, a higher abstraction level, and more flexibility and productivity in developing emergency evacuation plans. © 2020 IEEE.
Authorship Attribution (AA) is a task in which a disputed text is automatically assigned to an author chosen from a list of candidate authors. To this end, a model is trained on a dataset of textual documents with known authors, which can be considered a multi-class, single-label classification task. In this paper, we approach the task differently by extending information retrieval techniques to train an AA model. The method weights the AARR technique, presented in our previous study, to relax the value of term frequency. The efficiency of the proposed solution has been evaluated in several experiments on six datasets. The results show the superiority of the proposed solution, improving accuracy on the IMDB, Gutenberg books, Poetry, Blogs, PAN2011, and Twitter datasets by 33%, 31%, 31%, 19%, 6%, and 1%, respectively, with an average improvement of 19.94% over all datasets. The best accuracy on these datasets is 88%, 82%, 67%, 90%, 65%, and 81%, respectively. In addition, compared to the baseline system, the computation time of the proposed solution has been improved significantly (21.44X) by employing a dictionary-based indexing technique. © 2021 IEEE.
Hemmat, A., Vadaei, K., Shirian, M., Heydari, M.H., Fatemi, A.
This paper introduces an innovative approach to Retrieval-Augmented Generation (RAG) for video question answering (VideoQA) through the development of an adaptive chunking methodology and the creation of a bilingual educational dataset. Our proposed adaptive chunking technique, powered by CLIP embeddings and SSIM scores, identifies meaningful transitions in video content by segmenting educational videos into semantically coherent chunks. This methodology optimizes the processing of slide-based lectures, ensuring efficient integration of visual and textual modalities for downstream RAG tasks. To support this work, we gathered a bilingual dataset comprising Persian and English mid- to long-duration academic videos, curated to reflect diverse topics, teaching styles, and multilingual content. Each video is enriched with synthetic question-answer pairs designed to challenge pure large language models (LLMs) and underscore the necessity of retrieval-augmented systems. The evaluation compares our CLIP-SSIM-based chunking approach against conventional video slicing methods, demonstrating significant improvements across RAGAS metrics, including Answer Relevance, Context Relevance, and Faithfulness. Furthermore, our findings reveal that the multimodal image-text retrieval scenario achieves the best overall performance, emphasizing the importance of integrating complementary modalities. This research establishes a robust framework for video RAG pipelines, expanding the capabilities of multimodal AI systems for educational content analysis and retrieval. © 2025 IEEE.
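The abstract's adaptive chunking idea (CLIP embeddings plus SSIM scores to find slide transitions) can be pictured with the short sketch below, assuming OpenCV, scikit-image, and the Hugging Face CLIP model; the sampling interval, the two thresholds, and the rule that combines the structural and semantic signals are assumptions for illustration, not the paper's tuned settings.

```python
# Sketch: mark chunk boundaries in a lecture video where consecutive sampled
# frames differ both structurally (SSIM) and semantically (CLIP embeddings).
import cv2
import torch
from skimage.metrics import structural_similarity as ssim
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embed(frame_rgb):
    inputs = processor(images=frame_rgb, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(emb, dim=-1)

def chunk_boundaries(path, step_s=2.0, ssim_thr=0.7, clip_thr=0.85):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * step_s))
    boundaries, prev_gray, prev_emb, idx = [], None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            emb = clip_embed(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if prev_gray is not None:
                structural = ssim(prev_gray, gray)            # layout change
                semantic = float((prev_emb @ emb.T).item())   # content change
                if structural < ssim_thr and semantic < clip_thr:
                    boundaries.append(idx / fps)              # boundary time in seconds
            prev_gray, prev_emb = gray, emb
        idx += 1
    cap.release()
    return boundaries
```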
Due to the growing use of social networks and of viral marketing on these networks, finding influential people to maximize information diffusion has attracted attention. This is the Influence Maximization problem on social networks, whose main goal is to select a set of influential nodes that maximizes the influence spread in a social network. Researchers in this field have proposed different algorithms, but finding the influential people in the shortest possible time is still a challenge. Therefore, in this paper, the IMPT-C algorithm is presented, focusing on graph pre-processing to reduce the search space based on community structure. The algorithm exploits the topological properties of the graph to identify influential nodes. The experimental results indicate that IMPT-C achieves a large influence spread with low run time compared to state-of-the-art algorithms, including at least a 2.36% improvement over PHG in terms of influence spread. © 2021 IEEE.
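The abstract does not spell out IMPT-C's scoring, so the sketch below only illustrates the underlying idea it names: use community structure to shrink the candidate set before seed selection. It relies on networkx; the per-community quota and the degree-based ranking are assumptions, not the paper's actual rules.

```python
# Sketch: prune influence-maximization candidates by community structure,
# then pick seeds by a simple topological score (degree) within each community.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def candidate_seeds(G: nx.Graph, k: int, per_community: int = 3):
    communities = greedy_modularity_communities(G)
    candidates = []
    for com in communities:
        # keep only the few highest-degree nodes of each community
        top = sorted(com, key=G.degree, reverse=True)[:per_community]
        candidates.extend(top)
    # final seed set: best candidates over the reduced search space
    return sorted(candidates, key=G.degree, reverse=True)[:k]

G = nx.karate_club_graph()
print(candidate_seeds(G, k=5))
```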
The area of agent-oriented methodologies is maturing rapidly, and the time has come to begin drawing together the work of various research groups with the aim of developing the next generation of agent-oriented software engineering methodologies. An important step is to understand the differences between the key methodologies, and to understand each methodology's strengths, weaknesses, and domains of applicability. In this paper we investigate user views of four well-known methodologies. We extend Tropos, the most complete one from the users' viewpoint, by providing a proper supporting tool for it. © 2006 IEEE.
Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach to mitigate the limitations of large language models (LLMs) in answering domain-specific questions. Previous research has predominantly focused on improving the accuracy and quality of retrieved data chunks to enhance the overall performance of the generation pipeline. However, despite ongoing advancements, the critical issue of retrieving irrelevant information—which can impair a model’s ability to utilize its internal knowledge effectively—has received minimal attention. In this work, we investigate the impact of retrieving irrelevant information in open-domain question answering, highlighting its significant detrimental effect on the quality of LLM outputs. To address this challenge, we propose the Context Awareness Gate (CAG) architecture, a novel mechanism that dynamically adjusts the LLM’s input prompt based on whether the user query necessitates external context retrieval. Additionally, we introduce the Vector Candidates method, a core mathematical component of CAG that is statistical, LLM-independent, and highly scalable. We further examine the distributions of relationships between contexts and questions, presenting a statistical analysis of these distributions. This analysis can be leveraged to enhance the context retrieval process in retrieval-augmented generation (RAG) systems. © 2024 IEEE.
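The Context Awareness Gate and Vector Candidates method are described only at a high level above; the following is one possible, LLM-independent realization of such a gate as a minimal sketch assuming sentence-transformers: skip retrieval when the query's best similarity to the corpus falls below a threshold taken from the corpus's own similarity distribution. The percentile threshold and the use of chunk-to-chunk similarities are assumptions, not the paper's exact statistics.

```python
# Sketch: decide whether a query needs external context by comparing its best
# cosine similarity against a threshold drawn from the corpus similarity
# distribution (an illustrative stand-in for the paper's Vector Candidates method).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def build_gate(chunks, percentile=10):
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    # pairwise similarities between chunks approximate how "on-topic" the corpus is
    sims = chunk_vecs @ chunk_vecs.T
    threshold = np.percentile(sims[np.triu_indices(len(chunks), k=1)], percentile)

    def gate(query: str):
        q = model.encode([query], normalize_embeddings=True)
        scores = q @ chunk_vecs.T
        if float(scores.max()) < threshold:
            return None                    # answer from the LLM's internal knowledge
        return chunks[int(scores.argmax())]  # attach retrieved context to the prompt

    return gate

gate = build_gate(["The registrar's office opens at 8 am.", "Tuition is due in March."])
print(gate("When does the registrar open?"))   # returns a context chunk
print(gate("Who won the 2010 World Cup?"))     # likely None: no relevant context
```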
This study focuses on the generation of Persian named entity datasets through the application of machine translation to English datasets. The generated datasets were evaluated by experimenting with one monolingual and one multilingual transformer model. Notably, the CoNLL 2003 dataset achieved the highest F1 score of 85.11%, whereas the WNUT 2017 dataset yielded the lowest F1 score of 40.02%. The results highlight the potential of machine translation for creating high-quality named entity recognition datasets for low-resource languages like Persian. The study compares the performance of these generated datasets with English named entity recognition systems and provides insights into the effectiveness of machine translation for this task. Additionally, this approach could be used to augment data in low-resource languages or to create noisy data that makes named entity recognition systems more robust. © 2023 IEEE.
This paper introduces an innovative approach using Retrieval-Augmented Generation (RAG) pipelines with Large Language Models (LLMs) to enhance information retrieval and query response systems for university-related question answering. By systematically extracting data from the university’s official website, primarily in Persian, and employing advanced prompt engineering techniques, we generate accurate and contextually relevant responses to user queries. We developed a comprehensive university benchmark, UniversityQuestionBench (UQB), to rigorously evaluate our system’s performance. UQB focuses on Persian-language data, assessing accuracy and reliability through various metrics and real-world scenarios. Our experimental results demonstrate significant improvements in the precision and relevance of generated responses, enhancing user experiences, and reducing the time required to obtain relevant answers. In summary, this paper presents a novel application of RAG pipelines and LLMs for Persian-language data retrieval, supported by a meticulously prepared university benchmark, offering valuable insights into advanced AI techniques for academic data retrieval and setting the stage for future research in this domain. © 2024 IEEE.
DASH, or Dynamic Adaptive Streaming over HTTP, relies on a rate adaptation component to decide which representation to download for each video segment. A plethora of rate adaptation algorithms has been proposed in recent years. The bitrate decisions made by these algorithms largely depend on several factors: estimated network throughput, buffer occupancy, and buffer capacity. Yet these algorithms are not informed by a fundamental relationship between these factors and the chosen bitrate; as a result, we found that they do not perform consistently in all scenarios and require parameter tuning to work well under different buffer capacities. In this paper, we model a DASH client as an M/D/1/K queue, which allows us to calculate the expected buffer occupancy given a bitrate choice, network throughput, and buffer capacity. Using this model, we propose QUETRA, a simple rate adaptation algorithm. We evaluated QUETRA under a diverse set of scenarios and found that, despite its simplicity, it leads to better quality of experience (7% - 140%) than existing algorithms. © 2017 Association for Computing Machinery.
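The paper derives expected buffer occupancy from an M/D/1/K queueing analysis; the sketch below deliberately substitutes a much cruder deterministic buffer simulation just to make the selection idea concrete (pick the bitrate whose predicted occupancy stays closest to a target level, here half the capacity). It is an illustrative stand-in, not QUETRA's actual rule or its queueing model.

```python
# Sketch: choose the bitrate whose predicted buffer occupancy stays closest to
# a target level (crude deterministic stand-in for the paper's M/D/1/K model).
def predict_occupancy(bitrate, throughput, capacity, horizon=60.0, seg_dur=4.0):
    buffer = capacity / 2.0                    # seconds of video currently buffered
    total, steps, t = 0.0, 0, 0.0
    while t < horizon:
        download_time = seg_dur * bitrate / throughput
        buffer = max(0.0, buffer - download_time)      # playback drains the buffer
        buffer = min(capacity, buffer + seg_dur)       # new segment is enqueued
        t += download_time
        total += buffer
        steps += 1
    return total / steps

def choose_bitrate(bitrates, throughput, capacity):
    target = capacity / 2.0
    return min(bitrates,
               key=lambda b: abs(predict_occupancy(b, throughput, capacity) - target))

# bitrates and throughput in Mbps, buffer capacity in seconds
print(choose_bitrate([1.0, 2.5, 5.0, 8.0], throughput=4.0, capacity=20.0))
```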
Fradet, Pascal, Girault, Alain, Krishnaswamy, Ruby, Nicollin, Xavier, Shafiei, A.
Dataflow Models of Computation (MoCs) are widely used in embedded systems, including multimedia processing, digital signal processing, telecommunications, and automatic control. In a dataflow MoC, an application is specified as a graph of actors connected by FIFO channels. One of the most popular dataflow MoCs, Synchronous Dataflow (SDF), provides static analyses to guarantee boundedness and liveness, which are key properties for embedded systems. However, SDF (and most of its variants) lacks the capability to express the dynamism needed by modern streaming applications. In particular, the applications mentioned above have a strong need for reconfigurability to accommodate changes in the input data, the control objectives, or the environment. We address this need by proposing a new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF with transformation rules that specify how the topology and actors of the graph may be reconfigured. Starting from an initial RDF graph and a set of transformation rules, an arbitrary number of new RDF graphs can be generated at runtime. A key feature of RDF is that it can be statically analyzed to guarantee that all possible graphs generated at runtime will be consistent and live. We introduce the RDF MoC, describe its associated static analyses, and outline its implementation. © 2019 EDAA.
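The RDF-specific analyses are not detailed in the abstract, but the consistency guarantee it mentions builds on the classic SDF check: solve the balance equations for a repetition vector. The sketch below shows that standard check under an assumed edge-list encoding of the graph; it is not the paper's RDF analysis.

```python
# Sketch: classic SDF consistency check. Solve the balance equations
# q[src] * prod = q[dst] * cons for every channel; a solution in positive
# integers (the repetition vector) means the graph can run in bounded memory.
from fractions import Fraction
from math import lcm

def repetition_vector(actors, channels):
    # channels: list of (src, dst, production_rate, consumption_rate)
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, dst, prod, cons in channels:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
                changed = True
            elif src in q and dst in q and q[src] * prod != q[dst] * cons:
                return None          # inconsistent: token rates cannot balance
    if len(q) != len(actors):
        return None                  # graph is not connected
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * scale) for a, f in q.items()}

# A produces 2 tokens per firing, B consumes 3 per firing: q = {A: 3, B: 2}
print(repetition_vector(["A", "B"], [("A", "B", 2, 3)]))
```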
With large numbers of geographically dispersed clients, a centralized approach to Internet-based application development is neither scalable nor dependable. This paper presents a decentralized approach to dependable Internet-based application development, consisting of a logical structuring of collaborating sub-systems of geographically separated replicated servers. Two implementations of an Internet auction, one using a centralized approach and the other using our decentralized approach, are described. To evaluate the scalability of the two approaches, a number of experiments were performed on these implementations, and the results are presented here.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (03029743) 3502, pp. 9-22
Passive testing of a network protocol is the process of detecting faults in the protocol implementation by passively observing its input/output behaviors (execution trace) without interrupting normal network operations. In observing the trace, we can focus on the most relevant expected properties of the protocol specification by defining invariants on the specification and checking them on the trace. While intuitively extracting invariants from the protocol requirements is relatively simple for the control portion of the protocol system, taking the data portion into account is difficult. In this paper we propose algorithms for checking the correctness of given invariants on the specification and extracting the required constraints on the variables (the data portion). Once we generate the constraints for a given invariant, we can check whether the execution trace conforms to the specification with respect to the invariant and its constraints. We show the applicability of the algorithm on a case study: the Simple Connection Protocol (SCP). © IFIP 2005.
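The SCP-specific invariants and constraints are not reproduced in the abstract; the tiny sketch below only illustrates the general idea of passively checking an invariant with a data constraint over an observed input/output trace. The invariant and the trace format are made-up examples.

```python
# Sketch: passively check an invariant of the form "every output 'accept(n)'
# must be preceded by an input 'request(m)' with m <= n" over an I/O trace.
def check_trace(trace):
    """trace: list of (direction, message, value) tuples in observation order."""
    last_request = None
    for direction, msg, value in trace:
        if direction == "in" and msg == "request":
            last_request = value
        elif direction == "out" and msg == "accept":
            if last_request is None or last_request > value:
                return False, (msg, value)   # invariant and data constraint violated
    return True, None

ok, witness = check_trace([("in", "request", 3), ("out", "accept", 5),
                           ("in", "request", 7), ("out", "accept", 4)])
print(ok, witness)   # False ('accept', 4): the accepted value is below the request
```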
Ghassemi, F., Bakhsh, N.N., Torkladani, B., Sirjani, M.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025, 2, pp. 3028-3033
Multi-agent systems are applied as a solution for distributed IT systems. Organizational concepts are usually applied to analyze and design such systems; thus, a multi-agent system can be seen as an organization that coordinates agent interactions. In this paper we propose a formal model to specify the coordination behavior of a multi-agent system organization. This formal model enables developers to cross-check the agent interactions, the organizational structure, and the coordination behavior of the organization. We can also apply this formal model to evaluate system properties such as security. © 2006 IEEE.
Multi-agent systems are used as a solution for complex and distributed systems. Since agents are autonomous, they can be coordinated exogenously by the coordination language Reo, which coordinates agents without requiring any knowledge about them. We apply organizational concepts to analyze and design such systems. In this paper, we propose a formal model to specify the results achieved during these phases. This formal model helps in designing a coherent and consistent system and is used to make the implementation of the system with Reo systematic: we specify and implement the system with Reo according to the formal model. The paper also defines how to convert the formal specification to a Reo circuit, by providing Reo circuits for the different patterns of interaction protocols and by composing simpler circuits to support more complex patterns. © 2010.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (03029743) 4749, pp. 404-409
In this paper we have proposed an approach to extend the existing service-oriented architecture reference model by taking into consideration the hierarchical human needs model, which can help us in determining the user's goals and enhancing the service discovery process. This is achieved by enriching the user's context model and representing the needs model as a specific ontology. The main benefits of this approach are improved service matching, and ensuring better privacy as required by users in utilizing specific services like profile-matching. © Springer-Verlag Berlin Heidelberg 2007.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025, pp. 203-212
Web services are self-contained, modular units of application logic that provide business functionality to other applications via Internet connections. Several models have been used to compose Web services; they mainly operate at the specification level and provide static, data-dependent coordination processes. Hence they cannot support reconfigurable, dynamic coordination processes in which the participating Web services, and the coordination process itself, are not known explicitly prior to execution and are determined dynamically at run time. In this paper we present a framework to coordinate Web services using the Reo coordination language. Reo is a channel-based exogenous coordination language that has a formal basis and supports loose coupling, distribution, dynamic reconfiguration, and mobility. Given that Web services are inherently loosely coupled and primarily built independently, the channel-based structure of Reo and its reconfigurability provide a reconfigurable coordination mechanism for Web service composition. The proposed approach is a distributed, dynamic orchestration framework that uses Reo channels as a communication means between Web services and benefits from Reo's reconfiguration property to provide a dynamic coordination process. Due to the data independence of Reo, the proposed model is a data-neutral framework focused mainly on coordination. We also present a number of case studies using the proposed framework and investigate its pros and cons through them. © 2007 IEEE.
Applied Mathematics and Computation (00963003) 190(2), pp. 1514-1525
Many exact and approximate methods have so far been proposed for the construction of an optimal binary search tree. One such method is the use of evolutionary algorithms with satisfactorily improved cost efficiencies. This paper proposes a new genetic algorithm for constructing a near-optimal binary search tree. In this algorithm, a new greedy method is used for the crossover of chromosomes, while a new way is also developed for inducing mutation in them. Practical results show a rapid and desirable convergence towards the near-optimal solution. The use of a heuristic to create not-so-costly chromosomes as the first offspring, the greediness of the crossover, and the application of elitism in the selection of future-generation chromosomes are the most important factors leading the algorithm to near-optimal solutions at desirably high speed. According to the practical results, increasing the problem size does not cause any considerable difference between the solution obtained from the algorithm and the exact solution. © 2007 Elsevier Inc. All rights reserved.
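The paper's greedy crossover and mutation operators are not specified in the abstract, so the sketch below uses a plain permutation encoding (the order in which keys are inserted into an unbalanced BST), standard order crossover, swap mutation, and elitism just to make the overall GA loop concrete; it is a generic stand-in, not the authors' operators.

```python
# Sketch: GA for a near-optimal BST. Chromosome = key insertion order;
# fitness = weighted path length (sum of access probability * node depth).
import random

def weighted_path_length(order, prob):
    root, depth = None, {}
    for k in order:                       # plain BST insertion, tracking depths
        if root is None:
            root, depth[k] = [k, None, None], 1
            continue
        node, d = root, 1
        while True:
            d += 1
            side = 1 if k < node[0] else 2
            if node[side] is None:
                node[side], depth[k] = [k, None, None], d
                break
            node = node[side]
    return sum(prob[k] * depth[k] for k in order)

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j + 1]
    mid = set(middle)
    rest = [k for k in p2 if k not in mid]
    return rest[:i] + middle + rest[i:]

def swap_mutation(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        a, b = random.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
    return order

def ga_bst(prob, pop_size=40, generations=200, elite=4):
    keys = list(prob)
    pop = [random.sample(keys, len(keys)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: weighted_path_length(o, prob))
        nxt = pop[:elite]                                  # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2) # truncation selection
            nxt.append(swap_mutation(order_crossover(p1, p2)))
        pop = nxt
    return min(pop, key=lambda o: weighted_path_length(o, prob))

probs = dict(zip(range(10), [0.22, 0.18, 0.15, 0.12, 0.10,
                             0.08, 0.06, 0.04, 0.03, 0.02]))
best = ga_bst(probs)
print(best, weighted_path_length(best, probs))
```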
Scientia Iranica (23453605) 14(6), pp. 631-640
In this paper, reinforcement learning is used to model the reputation of buying and selling agents. Two important factors, quality and price, are considered in the proposed model. Each selling agent learns to evaluate the reputation of buying agents based on their profits for that seller and uses this reputation to offer a discount to reputable buying agents. Selling agents also learn to maximize their expected profits by using reinforcement learning to adjust the quality and price of their products in order to satisfy the buying agents' preferences. In turn, buying agents evaluate the reputation of selling agents based on two factors, reputation on quality and reputation on price, and thus avoid interacting with disreputable selling agents. In addition, the fact that buying agents can have different priorities on the quality and price of their goods is taken into account. The proposed model has been implemented with Aglet and tested in a large-sized marketplace. The results show that selling and buying agents that use the proposed algorithms obtain more satisfaction than the other selling and buying agents. © Sharif University of Technology, December 2007.
Journal of Theoretical and Applied Electronic Commerce Research (07181876) 2(1), pp. 1-17
In this paper, we propose a market model based on reputation and reinforcement learning algorithms for buying and selling agents. Three important factors, quality, price, and delivery time, are considered in the model. We take into account the fact that buying agents can have different priorities on the quality, price, and delivery time of their goods, and selling agents adjust their bids according to the buying agents' preferences. We also assume that multiple selling agents may offer the same goods with different qualities, prices, and delivery times. In our model, selling agents learn to maximize their expected profits by using reinforcement learning to adjust product quality, price, and delivery time. Each selling agent also models the reputation of buying agents based on their profits for that seller and uses this reputation to offer a discount to reputable buying agents. Buying agents learn to model the reputation of selling agents based on different features of the goods (reputation on quality, reputation on price, and reputation on delivery time) to avoid interaction with disreputable selling agents. The model has been implemented with Aglet and tested in a large-sized marketplace. The results show that selling and buying agents that model the reputation of their counterparts obtain more satisfaction than selling and buying agents that only use reinforcement learning. © 2007 Universidad de Talca.
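The abstract does not give the exact state, action, and reward design of the learning agents, so the sketch below is a generic single-state (bandit-style) Q-learning loop in which a selling agent tunes its price level to maximize expected profit against a simulated buyer; quality, delivery time, and the reputation bookkeeping are omitted, and the buyer model is an assumption.

```python
# Sketch: a selling agent learns a price level with tabular Q-learning;
# reward = profit if the (simulated) buyer accepts the offer, else 0.
import random

PRICES = [6, 8, 10, 12]          # candidate price levels
COST = 5                         # seller's unit cost
Q = {p: 0.0 for p in PRICES}
alpha, epsilon = 0.1, 0.2

def buyer_accepts(price):        # stand-in buyer: accepts cheaper offers more often
    return random.random() < max(0.0, 1.0 - (price - 6) / 8)

for _ in range(5000):
    price = random.choice(PRICES) if random.random() < epsilon else max(Q, key=Q.get)
    reward = (price - COST) if buyer_accepts(price) else 0.0
    Q[price] += alpha * (reward - Q[price])      # single-state Q-learning update

print({p: round(v, 2) for p, v in Q.items()}, "-> chosen price:", max(Q, key=Q.get))
```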
Raji, F., Torkladani, B., Brenjkoub, M.
2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025, pp. 534-537
A mobile agent should be able to roam autonomously and anonymously from one agent platform to another. To achieve this aim, a novel secure protocol is proposed that provides anonymity of the agent owner as well as the agent itinerary. In the presented method, a set of trusted auxiliary hosts called Mixers is employed to insert a transient fictitious owner at each step of the agent itinerary. The ability of the proposed protocol is analyzed and its resistance against traffic analysis attacks is illustrated. © 2007 IEEE.
Malaysian Journal of Computer Science (01279084) 20(1), pp. 35-50
When enterprises collaborate, a common frame of understanding of all products and catalogs in each organization is indispensable. Suppliers who collaborate virtually should be able to share product-related data, create new products, and update old products in their own catalogs. In this paper, we describe the development of a model for presenting electronic catalogs based on the OWL ontology language. The model uses the WordNet ontology to distinguish classes and the relationships between them. We use the SPARQL query language to introduce three types of search for the catalog management system. The concepts of this classification system are mapped to the concepts of current standard classification systems such as UNSPSC and eCl@ss. We use the vector space model (VSM) to determine the class of a product. For the customization aspect of the electronic catalog, we introduce a semantic recommendation procedure that is particularly efficient when applied to Internet shopping malls. The suggested procedure recommends semantically related products to customers and is based on Web usage mining, product classification, association rule mining, and purchase frequency. We applied the procedure to the MovieLens data set for performance evaluation, and experimental results are provided. The results show superior performance in terms of coverage and precision.
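The abstract mentions using the vector space model to determine a product's class but does not give the feature set; the sketch below shows the basic idea with scikit-learn TF-IDF vectors and cosine similarity, where the catalog classes and their example descriptions are made-up placeholders.

```python
# Sketch: assign a product description to the closest catalog class by
# cosine similarity between TF-IDF vectors (a simple vector space model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class_descriptions = {
    "Laptops":  "portable computer notebook battery screen keyboard",
    "Printers": "laser inkjet printer toner cartridge paper tray",
    "Monitors": "lcd led display screen resolution inches panel",
}

vectorizer = TfidfVectorizer()
class_names = list(class_descriptions)
class_matrix = vectorizer.fit_transform(class_descriptions.values())

def classify(product_description: str) -> str:
    vec = vectorizer.transform([product_description])
    sims = cosine_similarity(vec, class_matrix)[0]
    return class_names[sims.argmax()]

print(classify("15 inch notebook with long battery life and backlit keyboard"))
```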
Lecture Notes in Engineering and Computer Science (20780958) pp. 859-863
Negotiation is a process in which self-interested agents in e-commerce try to reach an agreement on one or more issues. The outcome of the negotiation depends on several parameters, such as the agents' strategies and the knowledge one agent has about its opponents. One way to discover an opponent's strategy is to find the similarity between strategies. In this paper we present a simple model for measuring the similarity of negotiators' strategies. Our measure is based only on the history of the offers during the negotiation sessions and uses a notion of Levenshtein distance. We implement this measure and show experimentally that using it can improve the recognition of negotiation strategies. This measure can also be used for modeling negotiators' behavior and for predictive decision-making.
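The paper's exact encoding of offer histories is not given in the abstract; the sketch below shows one simple way to realize the idea it names, bucketing numeric offers into symbols and comparing two sessions with a standard Levenshtein edit-distance computation turned into a similarity in [0, 1]. The bucketing scheme and normalization are assumptions.

```python
# Sketch: compare two negotiation sessions by bucketing their offer histories
# into symbols and computing a normalized Levenshtein similarity.
def bucket(offers, low, high, levels=5):
    width = (high - low) / levels
    return [min(levels - 1, int((o - low) / width)) for o in offers]

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def strategy_similarity(offers_a, offers_b, low=0, high=100):
    a, b = bucket(offers_a, low, high), bucket(offers_b, low, high)
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

conceder = [90, 70, 55, 45, 40]        # concedes quickly
boulware = [95, 93, 90, 85, 60]        # concedes late
print(strategy_similarity(conceder, boulware))
```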
Lecture Notes in Engineering and Computer Science (20780958) pp. 845-849
The development of electronic marketing has contributed models and structures for marketing strategies, and personalization is an inseparable part of electronic marketing. Existing personalization models do not have a comprehensive structure, cannot cover the marketing domains of all services and goods, and do not offer considerable precision in predicting customer behavior. In this paper, a personalization model is presented based on the theoretical fundamentals of marketing and the well-known concept of the 4P marketing mix. In comparison with other models, our model covers all electronic marketing domains of goods and services and provides greater precision in predicting customer behavior.
Communications in Computer and Information Science (18650937) 6, pp. 745-748
In this paper, we introduce a new approach for scoring Farsi (also called Persian) documents in a Persian search engine. The approach is based on a new stemming method for the Farsi language that works without any dictionary. Evaluation results show a significant improvement in the performance (precision/recall) of the information retrieval (IR) system using this stemmer. We combine our stemming method with a mathematical scoring approach named FDS to obtain a powerful scoring policy for relevant documents in a Persian search engine. © 2008 Springer-Verlag.
Simulation and Gaming (1552826X) 39(1), pp. 83-100
Distributed Artificial Intelligence techniques have evolved toward multi-agent systems (MASs) where agents solve specific problems. Bargaining is a challenging area well-explored in both MAS and economics. To make agents more human-like and to increase their flexibility to reach an agreement, the authors investigated the role of personality behaviors of participants in a multi-criteria bilateral bargaining in a single-good e-marketplace, where both parties are OCEAN agents based on the five-factor (Openness, Conscientiousness, Extraversion, Agreeableness, and Negative emotions) model of personality. The authors simulate a computational approach based on a heuristic bargaining protocol and personality model on artificial stereotypes. The results suggest compound behaviors appropriate to gain the best overall utility in the role of buyer and seller and with regard to social welfare and market activeness. This generic personality-based approach can be used as a predictive or descriptive model of human behavior to adopt in areas involving negotiation and bargaining. © 2008 Sage Publications.
This paper presents a model for culturally intelligent agent decision making. The proposed model is based on Schwartz's ten value types. We follow a fuzzy approach for identifying the agent's values and cultural dimensions. Each cultural value induces a set of behaviors that the agent performs according to the value's importance. In each situation, the agent selects the nearest situation by comparing it with its criteria, which the agent derives from its cultural values. We use FuzzyJ and Aglet for the implementation. © 2008 IEEE.