Koji Iwanuma

University of Yamanashi, Kōhu, Yamanashi, Japan

Publications (36) · 5.11 Total impact

  • Source
    ABSTRACT: This study considers approximation techniques for frequent itemset mining from data streams (FIM-DS) under resource constraints. In FIM-DS, a challenging problem is handling the huge combinatorial number of entries (i.e., itemsets) generated from each streaming transaction and stored in memory. Various types of approximation methods have been proposed for FIM-DS. However, these methods require almost O(2^L) space for the maximal transaction length L. If a transaction contains a sudden, intense burst of events, these methods break down, since memory consumption grows exponentially with L. We therefore present resource-oriented approximation algorithms that fix an upper bound on memory consumption so as to tolerate bursty transactions. The proposed algorithm requires only O(k) space for a resource-specified constant k and processes every transaction in O(kL) time. Consequently, it can handle any transaction without memory overflow or fatal response delay, while the output is guaranteed to contain no false negatives under certain conditions. Moreover, any output (even a false-negative one) is bounded by an approximation error that is determined dynamically in a resource-oriented manner. From an empirical viewpoint, it is necessary to keep this error as low as possible; we tackle this problem by dynamically reducing the original stream. Experimental results show that the resource-oriented approach can break the space limitation of previously proposed FIM-DS methods.
    06/2014;
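The fixed O(k)-space behaviour described in this abstract is in the spirit of counter-based stream summaries such as Space-Saving. The sketch below is purely illustrative, not the paper's algorithm: it handles single items rather than itemsets, and all names are hypothetical. It caps memory at k counters and never misses an item whose true count exceeds the tracked error.

```python
class SpaceSaving:
    """Counter-based stream summary with a hard memory cap of k entries.

    Illustrative only: the paper's algorithm handles *itemsets*; this
    sketch shows the simpler single-item case with the same O(k)-space,
    bounded-error flavour.
    """

    def __init__(self, k):
        self.k = k
        self.counts = {}  # item -> (count, overestimation error)

    def update(self, item):
        if item in self.counts:
            c, e = self.counts[item]
            self.counts[item] = (c + 1, e)
        elif len(self.counts) < self.k:
            self.counts[item] = (1, 0)
        else:
            # Evict the minimum counter; its count bounds the error
            # of the newly inserted item.
            victim = min(self.counts, key=lambda x: self.counts[x][0])
            c_min, _ = self.counts.pop(victim)
            self.counts[item] = (c_min + 1, c_min)

    def query(self, threshold):
        # No false negatives among tracked items: any item whose true
        # count reaches the threshold is retained with count >= threshold.
        return [i for i, (c, _) in self.counts.items() if c >= threshold]
```

Memory never exceeds k entries regardless of how bursty the stream is, which mirrors the resource-oriented guarantee the abstract describes.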
  • ABSTRACT: This paper describes techniques for simplifying a propositional clausal formula during the search process of satisfiability checking. In general, a simplification technique involves a trade-off between its cost and its effect; if it is executed during search, the cost can be problematic. We propose several on-the-fly simplification techniques whose computational cost is negligibly small. Hence, these techniques can be executed frequently throughout the run of a CDCL solver, i.e., during unit propagation, conflict analysis, removal of satisfied clauses, etc. The proposed simplification techniques are based on binary resolvents derived from the unit-propagation process, and consist of various probing techniques, self-subsuming resolution, and on-demand addition of binary resolvents. Experimental results show that these simplification techniques can improve the performance of CDCL solvers.
    Proceedings of the 2013 IEEE 25th International Conference on Tools with Artificial Intelligence; 11/2013
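One of the simplifications named in the abstract, self-subsuming resolution with a binary resolvent, can be sketched in a few lines. This is an illustrative fragment (clauses as sets of signed integers), not the authors' solver code: given a binary clause x ∨ y, any clause containing ¬x and y may safely drop ¬x, because the resolvent on x subsumes the original clause.

```python
def self_subsume(clause, binary):
    """Self-subsuming resolution of `clause` against a binary clause.

    Literals are nonzero integers; negation is sign flip. If
    `binary` = {x, y} and `clause` contains -x and y, then resolving
    on x yields clause - {-x} (y is already present), which subsumes
    the original clause, so -x can simply be deleted.
    Illustrative sketch only.
    """
    x, y = tuple(binary)
    for a, b in ((x, y), (y, x)):
        if -a in clause and b in clause:
            return clause - {-a}
    return clause  # no self-subsumption applies
```

Because the strengthened clause subsumes the original, replacing the clause in place preserves satisfiability, which is why such rewrites are safe to apply throughout the solver run.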
  • ABSTRACT: Modern explanatory inductive logic programming methods such as Progol, the Residue procedure, CF-induction, HAIL and Imparo use the principle of inverse entailment (IE). These IE-based methods commonly compute a hypothesis in two steps: first constructing an intermediate theory, and then generalizing its negation into the hypothesis with the inverse of the entailment relation. Inverse entailment ensures the completeness of generalization. On the other hand, it imposes many non-deterministic generalization operators that make the search space very large. For this reason, most of these methods use the inverse relation of subsumption instead of entailment. However, it is not clear how this logical reduction affects the completeness of generalization. In this paper, we investigate whether inverse subsumption can be embedded in a complete induction procedure, and if so, how this is to be realized. Our main result is a new form of inverse subsumption that ensures the completeness of generalization. Consequently, inverse entailment can be reduced to inverse subsumption without losing completeness for finding hypotheses in explanatory induction.
    Machine Learning 01/2012; 86:115-139. · 1.47 Impact Factor
  • ABSTRACT: CF-induction is a sound and complete procedure for finding hypotheses in full clausal theories. It is based on the principle of Inverse Entailment (IE) and consists of two procedures: construction of a bridge theory and its generalization. There are two ways to realize the generalization task in CF-induction: one uses a single deductive operator, called the γ-operator, and the other uses a recently proposed form of inverse subsumption. Whereas both are known to retain the completeness of CF-induction, their logical relationship and empirical characteristics have not yet been clarified. In this paper, we show their equivalence and clarify the difference in their search strategies, which often leads to significantly different features in the hypotheses they obtain.
    Proceedings of the 21st international conference on Inductive Logic Programming; 07/2011
  • Source
    AI Commun. 01/2010; 23:183-203.
  • ABSTRACT: This paper presents a new method for extracting important words from newspaper articles based on time-sequence information. This word-extraction method plays an important role in event-sequence mining. TF-IDF is a well-known method for ranking a word's importance in a document; however, it does not consider the time information embedded in sequential textual data, which is peculiar to newspapers. We propose a new word-extraction method, called TF-IDayF, which takes time-sequence information into account and can extract important/characteristic words expressing sequential events. TF-IDayF does not rely on the so-called burst phenomenon of topic-word occurrences, which has been studied by many researchers. The method is quite simple, yet effective and easy to compute in sequential text mining. We evaluate the proposed method from three points of view, i.e., a semantic viewpoint, a statistical one, and a data-mining viewpoint, through several experiments.
    Transactions of the Japanese Society for Artificial Intelligence 01/2009; 24:488-493.
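The abstract does not give the TF-IDayF formula, so the following is a hypothetical reading of the name only: it replaces IDF's document count with a count over publication days, so that words concentrated on a few days (event words) outrank everyday vocabulary. The `articles_by_day` representation and the exact weighting are assumptions for illustration.

```python
import math

def tf_idayf(articles_by_day, word):
    """Hypothetical sketch of a time-aware TF-IDF variant.

    `articles_by_day` maps a day to the list of words published that
    day. The score multiplies the total term frequency by an
    IDF-style factor computed over *days* rather than documents:
    words appearing on only a few days score higher.
    """
    n_days = len(articles_by_day)
    tf = sum(words.count(word) for words in articles_by_day.values())
    days_with = sum(1 for words in articles_by_day.values() if word in words)
    if days_with == 0:
        return 0.0
    return tf * math.log(n_days / days_with)
```

Under this reading, a word that appears every day gets weight log(1) = 0, regardless of how frequent it is, while a word tied to a single day's event keeps a positive score.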
  • ABSTRACT: Explanatory induction and descriptive induction are the two main frameworks for induction in logic. Both frameworks, however, have serious drawbacks: explanatory induction often exhibits an inductive-leap problem, and descriptive induction sometimes fails to explain given observations. Circumscriptive induction is a new framework intended to overcome these difficulties by unifying explanatory and descriptive induction. In this paper, we study and improve several aspects of circumscriptive induction. First, we reformulate the concepts of inductive leaps and conservativeness; the reformulated conservativeness becomes a partial generalization of the original one. We give a simple sufficient condition for the reformulated conservativeness and clarify the relationship between correct solutions and conservativeness. Furthermore, we propose a new tractable induction framework, called pointwise circumscriptive induction, which uses only first-order logic with equality in its formulation and demands no second-order computation. Pointwise circumscriptive induction enables us to derive interesting hypotheses through ordinary resolution performed in a mechanical way.
    Journal of Applied Logic 01/2009; 7:307-317. · 0.42 Impact Factor
  • Source
    ABSTRACT: The ability to find non-trivial consequences of an axiom set is useful in many applications of AI, such as theorem proving, query answering and nonmonotonic reasoning. The SOL (Skip Ordered Linear) calculus is one of the most significant calculi for consequence finding; it is complete for finding the non-subsumed consequences of a clausal theory. In this paper, we propose new complete pruning methods and a practical search strategy for SOL tableaux. These methods are indispensable for practical use in many application areas. The experimental results show that these techniques greatly improve the performance of the consequence-finding process.
    Proceedings of the LPAR 2008 Workshops, Knowledge Exchange: Automated Provers and Proof Assistants, and the 7th International Workshop on the Implementation of Logics, Doha, Qatar, November 22, 2008; 01/2008
  • ABSTRACT: Consequence finding has been recognized as an important technique in many intelligent systems involving inference. Previous work has considered propositional or first-order clausal theories for consequence finding. In this paper, we consider consequence finding from a default theory, which consists of a first-order clausal theory and a set of normal defaults. In an extension of a default theory, consequence finding can be done with the generating defaults for the extension. Alternatively, all extensions can be represented at once with the conditional answer format, which represents how a conclusion depends on which defaults. We also propose a procedure for consequence finding and query answering in a default theory using the first-order consequence-finding procedure SOL. In computing consequences from default theories efficiently, the notion of TCS-freeness is crucial for pruning the large number of irrational tableaux induced by the generating defaults for an extension. To simulate TCS-freeness, a refined SOL calculus called SOL-S(Γ) is adopted, using skip preference and complement checking.
    Journal of Intelligent Information Systems 01/2006; 26:41-58. · 0.83 Impact Factor
  • Source
    ABSTRACT: In this paper, we propose a new approach, called lemma reusing, for accelerating SAT-based planning and scheduling. Generally, SAT-based approaches generate a sequence of SAT problems that become larger and larger, and a SAT solver must solve these problems until it encounters a satisfiable one. Many state-of-the-art SAT solvers learn lemmas, called conflict clauses, to prune redundant search space, but lemmas deduced from one SAT problem cannot in general be applied to other SAT problems. However, for certain SAT encodings of planning and scheduling, we prove that lemmas generated from a SAT problem are reusable for solving larger SAT problems. We implemented the lemma-reusing planner (LRP) and the lemma-reusing job-shop scheduling solver (LRS). The experimental results show that LRP and LRS are faster than their non-reusing counterparts. Our approach makes it possible to use the latest SAT solvers more efficiently for SAT-based planning and scheduling.
    Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling, ICAPS 2006, Cumbria, UK, June 6-10, 2006; 01/2006
  • ABSTRACT: In this paper, we propose two kinds of semi-automatic training-example generation algorithms for rapidly synthesizing a domain-specific Web search engine. We use the keyword spice model as a basic framework; it is an excellent approach for building a domain-specific search engine with high precision and high recall. The keyword spice model, however, requires a huge number of training examples, which must be classified by hand. To overcome this problem, we propose two refinement algorithms based on semi-automatic training-example generation: (i) a sample-decision-tree-based approach, and (ii) a similarity-based approach. These approaches make it possible to build a highly accurate domain-specific search engine with little time and effort. The experimental results show that our approaches are very effective and practical for the personalization of a general-purpose search engine.
    2006 IEEE / WIC / ACM International Conference on Web Intelligence (WI 2006), 18-22 December 2006, Hong Kong, China; 01/2006
  • ABSTRACT: In this paper, we study frequent subsequence extraction from a single very long data sequence. First, we propose a novel frequency measure, called the total frequency, for counting multiple occurrences of a sequential pattern in a single data sequence. The total frequency is anti-monotonic and makes it possible to count pattern occurrences without duplication. Moreover, the total frequency has a good property for implementation based on the dynamic-programming strategy. Second, we give a simple on-line algorithm for a specialized subsequence-extraction problem, i.e., the problem with an infinite window length. This specialized problem is a relaxation of the general problem, so this fast on-line algorithm is important from the viewpoint of practical applications.
    Data Mining, Fifth IEEE International Conference on; 12/2005
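A duplication-free occurrence count in the spirit of the total frequency above can be illustrated by greedy leftmost matching of disjoint subsequence embeddings. This is a simplified stand-in, not the paper's exact definition; it shares the key property the abstract names, anti-monotonicity: extending the pattern can never increase the count.

```python
def disjoint_count(pattern, sequence):
    """Count mutually disjoint occurrences of `pattern` as a subsequence.

    Greedy leftmost matching: each element of `sequence` is consumed at
    most once, so occurrences never share positions (no duplication),
    and the count is anti-monotonic in the pattern. Illustrative
    stand-in for a duplication-free frequency measure.
    """
    count = 0
    i = 0  # next position in the pattern to match
    for item in sequence:
        if item == pattern[i]:
            i += 1
            if i == len(pattern):
                count += 1  # one complete disjoint embedding found
                i = 0
    return count
```

The single left-to-right pass also hints at why such a measure suits the on-line, infinite-window setting: the count can be maintained incrementally as each element of the stream arrives.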
  • K. Iwanuma, Y. Takano, H. Nabeshima
    ABSTRACT: In this paper, we propose a novel frequency measure, called the total frequency, for counting multiple occurrences of a sequential pattern in a very long single data sequence. The total frequency satisfies the anti-monotonicity property and makes it possible to count pattern occurrences without duplication. Moreover, the total frequency has a good property for implementation based on the dynamic-programming strategy. We also show preliminary experimental results evaluating the total frequency.
    Cybernetics and Intelligent Systems, 2004 IEEE Conference on; 01/2005
  • ABSTRACT: In this paper, we study an upside-down transformation of a branch in SOL/connection tableaux and show that SOL/connection tableaux using the folding-up operation can always accomplish a size-preserving transformation for any branch in any tableau. This fact solves the exponentially growing size problem caused both by the order-preserving reduction and by an incremental answer-computation problem.
    Theoretical Aspects of Computing - ICTAC 2005, Second International Colloquium, Hanoi, Vietnam, October 17-21, 2005, Proceedings; 01/2005
  • IASTED International Conference on Artificial Intelligence and Applications, part of the 23rd Multi-Conference on Applied Informatics, Innsbruck, Austria, February 14-16, 2005; 01/2005
  • Katsumi Inoue, Koji Iwanuma
    ABSTRACT: This paper is concerned with a multi-agent system that performs speculative computation under incomplete communication environments. In a master–slave multi-agent system with speculative computation, a master agent sends queries to slave agents during problem solving and proceeds with default answers when replies from the slave agents are delayed. In this paper, we first provide a semantics for speculative computation using default logic. We consider speculative computation in which reply messages from slave agents to the master are tentative and may change from time to time; in this setting, the default values used in speculative computation are only partially determined in advance. Next, we propose a procedure for speculative computation using the first-order consequence-finding procedure SOL with the answer-literal method. The use of a consequence-finding procedure is convenient for updating agents' beliefs as the situation in the world changes. We then further refine the SOL calculus using conditional answer computation and skip preference. The conditional answer format has the great advantage of explicitly representing how a conclusion depends on tentative replies and defaults; this dependency representation is important for avoiding unnecessary recomputation of tentative conclusions. The skip-preference method, in turn, prevents irrational/redundant derivations. Finally, we implemented a process-maintenance mechanism to avoid duplicate computation when slave agents change their answers: as long as new answers from slave agents do not conflict with any previously encountered situation, the obtained conclusions are never recomputed. We applied the proposed system to the meeting-room reservation problem to demonstrate the usefulness of the framework.
    Annals of Mathematics and Artificial Intelligence 01/2004; 42:255-291. · 0.20 Impact Factor
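The dependency bookkeeping this abstract describes, where each conclusion records the tentative replies and defaults it used so that a revised answer invalidates only the conclusions that actually depended on it, can be mimicked by a toy cache. All names below are hypothetical illustration, not the authors' SOL-based procedure.

```python
class Speculator:
    """Toy dependency-tracking cache in the spirit of conditional answers.

    Each derived conclusion records the (question, answer) pairs it
    used. When a slave agent revises an answer, only conclusions whose
    recorded dependencies no longer hold are discarded; everything
    else survives without recomputation. Hypothetical sketch only.
    """

    def __init__(self, defaults):
        self.answers = dict(defaults)  # defaults, overridden by replies
        self.cache = {}  # name -> (frozenset of (question, answer), value)

    def derive(self, name, questions, rule):
        # Derive a conclusion, recording exactly which answers it used.
        used = {q: self.answers[q] for q in questions}
        value = rule(used)
        self.cache[name] = (frozenset(used.items()), value)
        return value

    def revise(self, question, answer):
        if self.answers.get(question) == answer:
            return  # unchanged reply: nothing to invalidate
        self.answers[question] = answer
        # Keep only conclusions whose recorded dependencies still hold.
        self.cache = {n: (deps, v) for n, (deps, v) in self.cache.items()
                      if all(self.answers[q] == a for q, a in deps)}
```

A conclusion derived without consulting the revised question keeps its cached value, which is the point of the conditional-answer-style dependency representation.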
  • ABSTRACT: Consequence finding has been recognized as an important technique in many intelligent systems involving inference. Previous work has considered propositional or first-order clausal theories for consequence finding. In this paper, we consider consequence finding within a default theory, which consists of a first-order clausal theory and a set of normal defaults. In each extension of a default theory, consequence finding can be performed with the "generating defaults" for the extension. Alternatively, we consider all extensions in one theory with the "conditional consequence" format, which explicitly represents how a conclusion depends on which defaults. We also propose a procedure for computing consequences from a default theory based on the first-order consequence-finding procedure SOL. The SOL calculus is then further refined using skip preference and complement checking, which are highly effective at preventing irrational derivations. The proposed system applies well to a multi-agent system with speculative computation in an incomplete communication environment.
    Flexible Query Answering Systems, 6th International Conference, FQAS 2004, Lyon, France, June 24-26, 2004, Proceedings; 01/2004
  • ABSTRACT: SOLAR is an efficient first-order consequence-finding system based on a connection-tableau format with the Skip operation. Consequence finding [1-4] is a generalization of refutation finding, i.e., theorem proving, and is useful for many reasoning tasks such as knowledge compilation, inductive logic programming and abduction. One of the most significant calculi for consequence finding is SOL [2]. SOL is complete for consequence finding and can find all minimal-length consequences with respect to subsumption. SOLAR (SOL for Advanced Reasoning) is an efficient implementation of SOL that avoids producing non-minimal/redundant consequences thanks to various state-of-the-art pruning methods, such as skip-regularity, local failure caching and folding-up (see [5,6]).
    10/2003: pages 257-263;
  • Source
    Koji Iwanuma, Katsumi Inoue
    ABSTRACT: In this paper, we study speculative computation in a master–slave multi-agent system where reply messages sent from slave agents to the master are always tentative and may change from time to time. In this system, the default values used in speculative computation are only partially determined in advance. Inoue et al. [8] formalized speculative computation in such an environment with tentative replies, using the framework of the first-order consequence-finding procedure SOL with the well-known answer-literal method. We further refine the SOL calculus using conditional answer computation and skip preference. The conditional answer format has the great advantage of explicitly representing how a conclusion depends on the tentative replies and defaults used to derive it; this dependency representation is significantly important for avoiding unnecessary recomputation of tentative conclusions. Skip preference, in turn, is highly effective at preventing irrational/redundant derivations.
    Electronic Notes in Theoretical Computer Science. 10/2002;
  • Source
    Koji Iwanuma, Katsumi Inoue
    Computational Logic in Multi-Agent Systems: 3rd International Workshop, CLIMA'02, Copenhagen, Denmark, August 1, 2002, Pre-Proceedings; 01/2002