Taking Scope: The Natural Semantics of Quantifiers


Here, we say that "he" is coreferential with the noun phrase "Cyril". As a result, (14) is semantically equivalent to (15b):

    Cyril is Angus's dog. Cyril disappeared.

Consider by contrast the occurrence of "he" in (16a). In this case it is bound by the indefinite NP "a dog", and this is a different relationship from coreference. If we replace the pronoun "he" by "a dog", the result (16b) is not semantically equivalent to (16a):

    (16a) Angus had a dog but he disappeared.
    (16b) Angus had a dog but a dog disappeared.

Corresponding to (17a), we can construct an open formula (17b) with two occurrences of the variable x (we ignore tense to simplify the exposition):

    (17a) He is a dog and he disappeared.
    (17b) dog(x) & disappear(x)

Binding these occurrences of x with an existential quantifier gives a formula meaning "at least one entity is a dog and disappeared", which serves as a representation of "A dog disappeared":

    exists x.(dog(x) & disappear(x))

Binding the variable with a universal quantifier instead says that everything has the property that if it is a dog, it disappears; this is the representation of (20c), "Every dog disappeared":

    (20a) all x.(dog(x) -> disappear(x))

Although (20a) is the standard first-order logic translation of (20c), the truth conditions aren't necessarily what you expect. The formula says that if some x is a dog, then x disappears, but it doesn't say that there are any dogs.

So in a situation where there are no dogs, (20a) will still come out true.

Now you might argue that "every dog disappeared" does presuppose the existence of dogs, and that the logic formalization is simply wrong. But it is possible to find other examples which lack such a presupposition. For instance, we might explain that the value of a Python expression such as astring.replace(old, new) is the result of replacing every occurrence of old in astring by new, even if astring in fact contains no occurrences of old at all. We have seen a number of examples where variables are bound by quantifiers. What happens in formulas such as the following?

    ((exists x. dog(x)) -> bark(x))

The scope of the exists x quantifier is dog(x), so the occurrence of x in bark(x) is unbound. Consequently it can become bound by some other quantifier, for example the all x in the next formula:

    all x.((exists x. dog(x)) -> bark(x))

If all variable occurrences in a formula are bound, the formula is said to be closed. We mentioned before that Expression.fromstring can process strings of this kind and return objects of class Expression. Each instance expr of this class comes with a method free() which returns the set of variables that are free in expr. Recall the constraint on "to the north of" which we proposed earlier as (10), namely that if x is to the north of y, then y is not to the north of x.
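As a quick illustration (a minimal sketch; the particular formulas are just examples), NLTK's logic parser lets us check which variable occurrences are free:

```python
from nltk.sem import Expression

read_expr = Expression.fromstring

# A closed formula: both occurrences of x are bound by the quantifier.
closed = read_expr('exists x.(dog(x) & disappear(x))')
print(closed.free())        # set()

# Here the scope of "exists x" is only dog(x), so x is free in bark(x).
mixed = read_expr('(exists x.dog(x)) -> bark(x)')
print(mixed.free())         # {Variable('x')}
```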


We observed that propositional logic is not expressive enough to represent generalizations about binary predicates, and as a result we did not properly capture the argument "Sylvania is to the north of Freedonia; therefore, Freedonia is not to the north of Sylvania". You have no doubt realized that first-order logic, by contrast, is ideal for formalizing such rules:

    all x. all y. (north_of(x, y) -> -north_of(y, x))

The general case in theorem proving is to determine whether a formula that we want to prove (a proof goal) can be derived by a finite sequence of inference steps from a list of assumed formulas.

First, we parse the required proof goal and the two assumptions. Then we create a Prover9 instance and call its prove() method on the goal, given the list of assumptions. Happily, the theorem prover agrees with us that the argument is valid. We'll take this opportunity to restate our earlier syntactic rules for propositional logic and add the formation rules for quantifiers; together, these give us the syntax of first-order logic. In addition, we make explicit the types of the expressions involved.
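Here is roughly what that interaction looks like (a sketch; it assumes the external Prover9 binaries are installed and on NLTK's search path, and uses the predicate and constant names introduced informally above):

```python
from nltk.sem import Expression
from nltk.inference import Prover9

read_expr = Expression.fromstring

# Goal: Freedonia is not to the north of Sylvania.
NotFnS = read_expr('-north_of(f, s)')
# Assumption 1: Sylvania is to the north of Freedonia.
SnF = read_expr('north_of(s, f)')
# Assumption 2: the asymmetry constraint on "to the north of".
R = read_expr('all x. all y. (north_of(x, y) -> -north_of(y, x))')

prover = Prover9()
print(prover.prove(NotFnS, [SnF, R]))   # True
```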


If a predicate takes n arguments, we say that n is the arity of the predicate. We have looked at the syntax of first-order logic, and in 4 we will examine the task of translating English into first-order logic. Yet as we argued in 1, this only gets us further forward if we can give a meaning to sentences of first-order logic. In other words, we need to give a truth-conditional semantics to first-order logic.

From the point of view of computational semantics, there are obvious limits to how far one can push this approach. Although we want to talk about sentences being true or false in situations, we only have the means of representing situations in the computer in a symbolic manner. Despite this limitation, it is still possible to gain a clearer picture of truth-conditional semantics by encoding models in NLTK. In the models we shall build in NLTK, we'll adopt a more convenient alternative, in which Val(P) is a set S of pairs, defined in terms of a Boolean-valued function f as follows:

    S = {s | f(s) = True}

Such an f is called the characteristic function of S (as discussed in the further readings).

Relations are represented semantically in NLTK in the standard set-theoretic way: as sets of tuples. For example, let's suppose we have a domain of discourse consisting of the individuals Bertie, Olive, and Cyril, where Bertie is a boy, Olive is a girl, and Cyril is a dog. For mnemonic reasons, we use b, o, and c as the corresponding labels in the model.

We can declare the domain and valuation as shown in the sketch below, using the utility function Valuation.fromstring() to convert a string of symbol => value pairs into a Valuation object. According to this valuation, the value of see is a set of tuples such that Bertie sees Olive, Cyril sees Bertie, and Olive sees Cyril. Your Turn: Draw a picture of the domain of m and the sets corresponding to each of the unary predicates, by analogy with the diagram shown in 1.
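The sketch below builds the model just described; the see tuples reproduce the relation stated in the text:

```python
import nltk

v = """
bertie => b
olive => o
cyril => c
boy => {b}
girl => {o}
dog => {c}
walk => {o, c}
see => {(b, o), (c, b), (o, c)}
"""
val = nltk.Valuation.fromstring(v)
dom = val.domain                 # the set {'b', 'o', 'c'}
m = nltk.Model(dom, val)
```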

You may have noticed that our unary predicates (i.e., boy, girl, dog) also come out as sets of singleton tuples rather than as sets of individuals. This is a convenience which allows us to have a uniform treatment of relations of any arity. In our models, the counterpart of a context of use is a variable assignment. This is a mapping from individual variables to entities in the domain. Assignments are created using the Assignment constructor, which also takes the model's domain of discourse as a parameter. We are not required to actually enter any bindings, but if we do, they are in a (variable, value) format similar to what we saw earlier for valuations.

In addition, there is a print format for assignments which uses a notation closer to that often found in logic textbooks. Let's now look at how we can evaluate an atomic formula of first-order logic. First, we create a model, then we call the evaluate() method to compute the truth value. What's happening here? We are evaluating a formula which is similar to our earlier example, see(olive, cyril).

However, when the interpretation function encounters the variable y, rather than checking for a value in val, it asks the variable assignment g to come up with a value. Since we already know that the individuals o and c stand in the see relation, the value True is what we expected. In this case, we can say that the assignment g satisfies the formula see(olive, y).
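Concretely (continuing with the model m and domain dom from the sketch above):

```python
g = nltk.Assignment(dom, [('x', 'o'), ('y', 'c')])
print(g)                                # g[c/y][o/x]  -- the textbook-style print format

# y gets its value c from the assignment g, and (o, c) is in the see relation.
print(m.evaluate('see(olive, y)', g))   # True
```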


By contrast, the following formula evaluates to False relative to g; check that you see why this is. In our approach (though not in standard first-order logic), variable assignments are partial. For example, g says nothing about any variables apart from x and y. The method purge() clears all bindings from an assignment. If we now try to evaluate a formula such as see(olive, y) relative to the purged g, it is like trying to interpret a sentence containing the pronoun him when we don't know what him refers to.
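Continuing the same session; see(y, x) is used here as an illustrative formula that comes out False under this g:

```python
print(m.evaluate('see(y, x)', g))       # False: g maps (y, x) to (c, o), and (c, o) is not in see

g.purge()                               # remove all bindings
print(g)                                # g  -- an empty assignment
print(m.evaluate('see(olive, y)', g))   # 'Undefined': y has no value and is not quantified
```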

In this case, the evaluation function fails to deliver a truth value. Since our models already contain rules for interpreting Boolean operators, arbitrarily complex formulas can be composed and evaluated. The general process of determining truth or falsity of a formula in a model is called model checking. One of the crucial insights of modern logic is that the notion of variable satisfaction can be used to provide an interpretation for quantified formulas. Let's use (24) as an example:

    (24) exists x.(girl(x) & walk(x))

When is it true? Let's think about all the individuals in our domain, i.e. b, o, and c.

We want to check whether any of these individuals has the property of being a girl and walking; in other words, we want to know whether there is some u in the domain such that the assignment g[u/x] satisfies the open formula girl(x) & walk(x). In fact, o is such a u. One useful tool offered by NLTK is the satisfiers() method. This returns a set of all the individuals that satisfy an open formula. The method parameters are a parsed formula, a variable, and an assignment. A few examples are given in the sketch below; it's useful to think about why fmla2 and fmla3 receive the values they do.
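The sketch below first checks the quantified formula directly, then applies satisfiers() to some open formulas; fmla2 and fmla3 are the ones discussed next, and fmla1 is an extra illustration:

```python
read_expr = nltk.sem.Expression.fromstring

print(m.evaluate('exists x.(girl(x) & walk(x))', g))   # True: o is a girl who walks

fmla1 = read_expr('girl(x) | boy(x)')
print(m.satisfiers(fmla1, 'x', g))      # {'b', 'o'}

fmla2 = read_expr('girl(x) -> walk(x)')
print(m.satisfiers(fmla2, 'x', g))      # {'b', 'c', 'o'}

fmla3 = read_expr('walk(x) -> girl(x)')
print(m.satisfiers(fmla3, 'x', g))      # {'b', 'o'}
```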

The truth conditions for -> mean that fmla2 is equivalent to -girl(x) | walk(x), which is satisfied by anything that either isn't a girl or walks. Since neither b (Bertie) nor c (Cyril) is a girl according to model m, they both satisfy the whole formula, and o satisfies it because o walks, thereby satisfying the second disjunct. Now, since every member of the domain of discourse satisfies fmla2, the corresponding universally quantified formula is also true.

Your Turn: Try to figure out, first with pencil and paper, and then using m.evaluate(), the truth values of the quantified counterparts of these formulas. Make sure you understand why they receive these values.

What happens when we want to give a formal representation of a sentence with two quantifiers, such as the following?

    (26) Everybody admires someone.

There are at least two ways of expressing (26) in first-order logic:

    (27a) all x. (person(x) -> exists y. (person(y) & admire(x, y)))
    (27b) exists y. (person(y) & all x. (person(x) -> admire(x, y)))

Can we use both of these? The answer is Yes, but they have different meanings: (27b) is logically stronger, since it claims there is a single person who is admired by everyone, whereas (27a) only requires that every person admires someone, possibly a different someone in each case. We distinguish between (27a) and (27b) in terms of the scope of the quantifiers: in the first, the universal quantifier takes scope over the existential, and in the second the order is reversed. So now we have two ways of representing the meaning of (26), and they are both quite legitimate. In other words, we are claiming that (26) is ambiguous with respect to quantifier scope, and the formulas in (27) give us a way to make the two readings explicit. However, we are not just interested in associating two distinct representations with (26); we also want to show in detail how the two representations lead to different conditions for truth in a model.

The admire relation can be visualized using the mapping diagram shown in (28), where an arrow between two individuals x and y indicates that x admires y. So j and b both admire b (Bruce is very vain), while e admires m and m admires e. In this model, formula (27a) above is true but (27b) is false. One way of exploring these results is by using the satisfiers() method of Model objects on the open formulas that form the bodies of (27a) and (27b); in the sketch below these are fmla4 and fmla5. The result for fmla4 shows that it holds of every individual in the domain. By contrast, fmla5 has no satisfiers for the variable y: there is no person that is admired by everybody.
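A sketch of this second model and the open formulas; the admire tuples reproduce the relation described above, the full names behind e and m (elspeth, matthew) are illustrative assumptions, and fmla6 (discussed next) is included as well:

```python
v2 = """
bruce => b
elspeth => e
julia => j
matthew => m
person => {b, e, j, m}
admire => {(j, b), (b, b), (m, e), (e, m)}
"""
val2 = nltk.Valuation.fromstring(v2)
dom2 = val2.domain
m2 = nltk.Model(dom2, val2)
g2 = nltk.Assignment(dom2)

# fmla4: the body of (27a) with x left free -- "x admires some person"
fmla4 = read_expr('(person(x) -> exists y. (person(y) & admire(x, y)))')
print(m2.satisfiers(fmla4, 'x', g2))    # {'b', 'e', 'j', 'm'}

# fmla5: the body of (27b) with y left free -- "y is admired by every person"
fmla5 = read_expr('(person(y) & all x. (person(x) -> admire(x, y)))')
print(m2.satisfiers(fmla5, 'y', g2))    # set()

# fmla6 (discussed below): "y is a person admired by both Bruce and Julia"
fmla6 = read_expr('(person(y) & all x. ((x = bruce | x = julia) -> admire(x, y)))')
print(m2.satisfiers(fmla6, 'y', g2))    # {'b'}
```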

Taking a different open formula, fmla6 (also in the sketch above), we can verify that there is a person, namely Bruce, who is admired by both Julia and Bruce. Your Turn: Devise a new model based on m2 such that (27a) comes out false in your model; similarly, devise a new model such that (27b) comes out true.


We have been assuming that we already had a model, and wanted to check the truth of a sentence in the model. By contrast, model building tries to create a new model, given some set of sentences. If it succeeds, then we know that the set is consistent, since we have an existence proof of the model. One option is to treat our candidate set of sentences as assumptions, while leaving the goal unspecified. The following interaction shows how both [a, c1] and [a, c2] are consistent lists, since Mace succeeds in building a model for each of them, while [c1, c2] is inconsistent.
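A sketch of that interaction; it assumes the external Mace4 binary is installed, and the particular sentences a, c1, and c2 are illustrative stand-ins:

```python
from nltk.sem import Expression
from nltk.inference import Mace

read_expr = Expression.fromstring

a  = read_expr('exists x. (man(x) & walks(x))')
c1 = read_expr('mortal(socrates)')
c2 = read_expr('-mortal(socrates)')

mb = Mace(5)                              # only look for models of size up to 5
print(mb.build_model(None, [a, c1]))      # True: a model exists, so [a, c1] is consistent
print(mb.build_model(None, [a, c2]))      # True: [a, c2] is consistent as well
print(mb.build_model(None, [c1, c2]))     # False: no model, so [c1, c2] is inconsistent
```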

We can also use the model builder as an adjunct to the theorem prover. We can feed this same input to Mace4, and the model builder will try to find a counterexample, that is, to show that g does not follow from S.


If g fails to follow from S, then Mace4 may well return with a counterexample faster than Prover9 concludes that it cannot find the required proof. Conversely, if g is provable from S, Mace4 may take a long time unsuccessfully trying to find a countermodel, and will eventually give up.

Our assumptions are the list [There is a woman that every man loves, Adam is a man, Eve is a woman].


Our conclusion is Adam loves Eve. Can Mace4 find a model in which the premises are true but the conclusion is false? In the following code, we use MaceCommand which will let us inspect the model that has been built. So the answer is Yes: Mace4 found a countermodel in which there is some woman other than Eve that Adam loves. But let's have a closer look at Mace4's model, converted to the format we use for valuations.
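A sketch of this step, again assuming Mace4 is installed; the assumption names a4-a6 and the goal g follow the description above:

```python
from nltk.sem import Expression
from nltk.inference import MaceCommand

read_expr = Expression.fromstring

a4 = read_expr('exists y. (woman(y) & all x. (man(x) -> love(x, y)))')
a5 = read_expr('man(adam)')
a6 = read_expr('woman(eve)')
g  = read_expr('love(adam, eve)')

mc = MaceCommand(g, assumptions=[a4, a5, a6])
print(mc.build_model())    # True: a countermodel was found
print(mc.valuation)        # the countermodel, converted to a Valuation, e.g.
# {'C1': 'b', 'adam': 'a', 'eve': 'a', 'love': {('a', 'b')},
#  'man': {('a',)}, 'woman': {('a',), ('b',)}}   (details may vary by Mace4 version)
```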


The general form of this valuation should be familiar to you: it contains some individual constants and predicates, each with an appropriate kind of value. What might be puzzling is the entry for C1. This is a "Skolem constant" that the model builder introduces as a representative of the existential quantifier. That is, when the model builder encountered the exists y part of a4 above, it knew that there is some individual b in the domain which satisfies the open formula in the body of a4.

However, it doesn't know whether b is also the denotation of an individual constant anywhere else in its input, so it makes up a new name for b on the fly, namely C1. Now, since our premises said nothing about the individual constants adam and eve, the model builder has decided there is no reason to treat them as denoting different entities, and they both get mapped to a.

Moreover, we didn't specify that man and woman denote disjoint sets, so the model builder lets their denotations overlap. This illustrates quite dramatically the implicit knowledge that we bring to bear in interpreting our scenario, but which the model builder knows nothing about. So let's add a new assumption which makes the sets of men and women disjoint.
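Continuing the sketch above with the extra disjointness assumption:

```python
a7 = read_expr('all x. (man(x) -> -woman(x))')

mc2 = MaceCommand(g, assumptions=[a4, a5, a6, a7])
print(mc2.build_model())   # True: there is still a countermodel
print(mc2.valuation)       # but now man and woman have disjoint extensions,
                           # and Adam loves some woman other than Eve
```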

The model builder still produces a countermodel, but this time it is more in accord with our intuitions about the situation. On reflection, we can see that there is nothing in our premises which says that Eve is the only woman in the domain of discourse, so the countermodel is in fact acceptable. If we wanted to rule it out, we would have to add a further assumption such as exists y. (woman(y) & all x. (woman(x) -> (x = y))), requiring that there be exactly one woman in the domain.

At the beginning of the chapter we briefly illustrated a method of building semantic representations on the basis of a syntactic parse, using the grammar framework developed in 9. This time, rather than constructing an SQL query, we will build a logical form. One of our guiding ideas for designing such grammars is the Principle of Compositionality. Principle of Compositionality: The meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined. We will assume that the semantically relevant parts of a complex expression are given by a theory of syntactic analysis.

Within this chapter, we will take it for granted that expressions are parsed against a context-free grammar. However, this is not entailed by the Principle of Compositionality. Our goal now is to integrate the construction of a semantic representation smoothly with the process of parsing. In (29), the sem value at the root node shows a semantic representation for the whole sentence, while the sem values at lower nodes show semantic representations for constituents of the sentence. Since the values of sem have to be treated in a special manner, they are distinguished from other feature values by being enclosed in angle brackets.

So far, so good, but how do we write grammar rules which will give us this kind of result? Our approach will be similar to that adopted for the grammar sql0. However, in the present case we will use function application rather than string concatenation as the mode of composition. To be more specific, suppose we have NP and VP constituents with appropriate values for their sem nodes. Then the sem value of an S is handled by a rule like the following:

    S[SEM=<?vp(?np)>] -> NP[SEM=?np] VP[SEM=?vp]

Observe that in the case where the value of sem is a variable, we omit the angle brackets.

From this, we can conclude that the sem value of the S is obtained by applying the VP's semantic representation as a function to the NP's semantic representation as its argument. The VP rule says that the parent's semantics is the same as the head child's semantics. The two lexical rules provide non-logical constants to serve as the semantic values of Cyril and barks respectively. There is an additional piece of notation in the entry for barks which we will explain shortly. That piece of notation, the λ operator, provides us with an invaluable tool for combining expressions of first-order logic as we assemble a meaning representation for an English sentence.
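Putting these rules together, here is a minimal runnable sketch of such a grammar (the exact rule inventory and feature names are illustrative):

```python
import nltk

gram = r"""
% start S
S[SEM=<?vp(?np)>] -> NP[SEM=?np] VP[SEM=?vp]
VP[SEM=?v] -> IV[SEM=?v]
NP[SEM=<cyril>] -> 'Cyril'
IV[SEM=<\x.bark(x)>] -> 'barks'
"""
grammar = nltk.grammar.FeatureGrammar.fromstring(gram)
parser = nltk.FeatureChartParser(grammar)

for tree in parser.parse('Cyril barks'.split()):
    print(tree.label()['SEM'])     # bark(cyril)
```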

In 3, we pointed out that mathematical set notation was a helpful method of specifying properties P of words that we wanted to select from a document. We illustrated this with (31), which we glossed as "the set of all w such that w is an element of V (the vocabulary) and w has property P". It turns out to be extremely useful to add something to first-order logic that will achieve the same effect: the λ operator, which binds a variable to form an "abstract" over an open formula. Since we are not trying to do set theory here, we just treat V as a unary predicate.

The corresponding NLTK representation is given in (33c). A couple of English glosses for (33b) are "be an x such that x walks and x chews gum" or "have the property of walking and chewing gum". This is illustrated in (34a) and its translation (34b):

    (34a) To walk and chew-gum is hard.

A more official statement of how abstracts are built is given in (35). But what we usually do with properties is attribute them to individuals. In (36), (33b) is predicated of the term gerald.

Now (36) says that Gerald has the property of walking and chewing gum, which has the same meaning as (37). The "reduction" of (36) to (37) is an extremely useful operation in simplifying semantic representations, and we shall use it a lot in the rest of this chapter. The reduced formula should receive the same semantic value as the original; this is indeed true, subject to a slight complication that we will come to shortly. Just as (33b) plays the role of a unary predicate, (38) works like a binary predicate: it can be applied directly to two arguments. We might try this:
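Here is a sketch of these reductions in NLTK; the two-place abstract and its predicate names (dog, own) are illustrative assumptions rather than the text's numbered example:

```python
from nltk.sem import Expression

read_expr = Expression.fromstring

# Beta-reduction: predicating the abstract of the individual constant gerald.
expr = read_expr(r'(\x.(walk(x) & chew_gum(x)))(gerald)')
print(expr.simplify())                                   # (walk(gerald) & chew_gum(gerald))

# A two-place abstract, applied to two arguments one at a time via applyto().
binary = read_expr(r'\x.\y.(dog(x) & own(y, x))')
cyril, angus = read_expr('cyril'), read_expr('angus')
print(binary.applyto(cyril).simplify())                  # \y.(dog(cyril) & own(y,cyril))
print(binary.applyto(cyril).applyto(angus).simplify())   # (dog(cyril) & own(angus,cyril))
```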


Applying abstracts to individual terms will not take us all the way, however; we also need to allow abstraction over variables of higher type. Let's use P and Q as variables over properties; we can then form abstracts such as \P.P(angus).


Consider a pair of open terms such as (39a) \y.see(y, x) and (39b) \y.see(y, z), which differ only in the identity of the free variable, and suppose we apply the λ-term \P.exists x.P(x) to each of these terms, giving (40a) and (40b). We pointed out earlier that the results of the application should be semantically equivalent. But if we let the free variable x in (39a) fall inside the scope of the existential quantifier in (40a), then after reduction the results will be different: (41a) exists x.see(x, x) versus (41b) exists x.see(x, z). What has gone wrong here? Clearly, we want to forbid the kind of variable "capture" shown in (41a).
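A sketch of how this plays out in NLTK: testing α-equivalence, and watching β-reduction rename the bound variable rather than capture the free x:

```python
from nltk.sem import Expression
from nltk.sem.logic import Variable

read_expr = Expression.fromstring

# exists x.P(x) and exists z.P(z) are alphabetic variants, and count as equal:
e1 = read_expr('exists x.P(x)')
e2 = e1.alpha_convert(Variable('z'))
print(e2)              # exists z.P(z)
print(e1 == e2)        # True

# Applying \P.exists x.P(x) to an argument in which x occurs free:
func = read_expr(r'\P.exists x.P(x)')
arg = read_expr(r'\y.see(y, x)')
print(func.applyto(arg).simplify())    # exists z1.see(z1,x)   (the exact index may vary)
```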

In order to deal with this problem, let's step back a moment. Does it matter what particular name we use for the variable bound by the existential quantifier in the function expression of (40a)? The answer is No: for example, exists x.P(x) and exists y.P(y) are equivalent; they are "alphabetic variants" of each other, differing only in the choice of bound variable. When we test for equality of VariableBinderExpressions in the logic module (i.e., λ-abstracts and quantified formulas), we are in fact testing for α-equivalence. When we carry out β-reduction of an application f(a), we check whether there are free variables in a that also occur as bound variables in f. Suppose, as in the example discussed above, that x is free in a, and that f contains the subterm exists x.P(x). In this case, we produce an alphabetic variant of exists x.P(x), say exists z1.P(z1), and then carry on with the reduction. As you work through examples like these in the following sections, you may find that the logical expressions which are returned have different variable names; for example, you might see z14 in place of z1 in the above formula. This change in labeling is innocuous; in fact, it is just an illustration of alphabetic variants.

At the start of this section, we briefly described how to build a semantic representation for "Cyril barks".

Steedman sets syntax and semantics in the context of human sentence processing as well as the efficient statistical parsing of corpora; reviewers have called the book "one of the most impressive and thought-provoking books on language in many years".

Barwise and Cooper argue that the syntactic structure of quantified sentences in predicate calculus is different from the syntactic structure of quantified sentences in natural language.

In their article, Barwise and Cooper discuss the notion of generalized quantification and formalize it. They propose a detailed analysis of the implications of generalized quantifier theory for natural language, and they support their theory with examples from English data. On their account, a determiner combines with a set expression (the denotation of a noun) to form a quantified noun phrase.

Barwise and Cooper argue that there should be a fixed set of contexts that determines the meaning of the basic expressions in quantified sentences. They further argue that a quantifier divides up the family of sets provided by the model M: combined with some sets it produces the truth value T, and combined with other sets it produces the truth value F. They propose a simple formalization; for instance, a sentence like "Most babies sneeze" will have the truth value T if the set of sneezers contains most of the babies.
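As a rough illustration of this set-of-sets idea (a minimal sketch; the "more than half" reading of most and the toy sets are assumptions, not Barwise and Cooper's own formalization):

```python
# Determiners viewed as relations between two sets: the noun set A and the
# verb-phrase set B.
def every(A, B):
    return A <= B                       # every A is a B  iff  A is a subset of B

def some(A, B):
    return bool(A & B)                  # some A is a B  iff  A and B overlap

def most(A, B):
    return len(A & B) > len(A - B)      # most As are Bs iff A-and-B outnumber A-but-not-B

babies   = {'b1', 'b2', 'b3'}
sneezers = {'b1', 'b2', 'x9'}
print(most(babies, sneezers))           # True: two of the three babies sneeze
print(every(babies, sneezers))          # False: b3 does not sneeze
```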

Barwise and Cooper argue that a proper name within an NP also acts as a quantifier: the NP built from a proper name denotes a family of sets, namely the sets containing the particular individual denoted by the name. They also propose syntactic formation rules for the language L(GQ) using logical symbols, and give a semantic analysis of L(GQ) in set-theoretic terms.

They support their arguments with English data.

Carlson: According to Carlson, the different meanings of the English bare plural arise from the manner in which the context of the sentence interacts with bare plural NPs. These differences arise because of the way in which the context, or presupposition set, of the sentence interacts with the quantified expressions and the set NPs.

Carlson argues that the interactions of quantified NPs with negation and bare plurals depend on their relative scope properties. He argues that though these quantifiers are quite distinct from each other, it is plausible to represent all of them in a unified manner. She suggests that the quantificational cases are almost paraphrasable by partitives; the only difference between them is that the restrictor clause of the quantifier is an open-ended set, whereas the partitive involves a definite set.

Gennaro Chierchia and Sally McConnell-Ginet: According to Chierchia and McConnell-Ginet, quantificational expressions introduce into natural languages the power to convey generalizations.

Quantifiers express how many of the individuals in a given domain have a given property. Chierchia and McConnell-Ginet analyse quantifier logic within a truth-conditional semantic framework. On their view, quantified expressions indicate how many different values from the set of entities we have to consider; they are the generalizing component of the language. They propose that quantified sentences are built out of sentences that contain variables.

Chierchia and McConnell-Ginet hypothesize a syntactic account of quantification in terms of c-command and scope interaction: an occurrence of a variable xn is syntactically bound by a quantifier Qn iff Qn is the lowest quantifier that c-commands xn. They also provide a semantic account of quantification.

They suggest that it is part of the semantics of pronouns that they can refer to any individual at all in a given set, and that pronouns can also be used with quantifiers to say something general about such a set. They argue that models for the predicate calculus are made up of two things: first, a specification of the domain of discourse, and second, a specification of the extensions of the language's constants. They propose a corresponding model structure for predicate calculus semantics.

According to Chierchia and McConnell-Ginet, quantification in the predicate calculus and quantification in natural language are connected. They apply the predicate calculus model to English quantificational NPs, arguing that in the case of English quantificational NPs there is a presupposition set that helps determine the truth conditions of the proposition. They analyse scope-taking and binding phenomena in the syntactic structure of the QP, which parallels the compositional semantic representation of natural language quantification.

Chierchia and McConnell-Ginet also highlight the significance of the Generalized Quantifier approach, on which quantifiers denote sets of sets.

Szabolcsi: According to Szabolcsi, the quantified expressions of a logical language are different from quantification in natural language. The syntax of a logical language specifies how a quantifier operator combines with expressions to yield new expressions, and the semantics specifies its effect.

Szabolcsi argues that the scope of an operator in logic is determined by the constituent it is attached to, but that in natural language one has to distinguish between semantic scope and syntactic domain. The scope of a quantifier A is the property that is asserted to be an element of A on a given derivation of the sentence; if that property incorporates another operator B, such as a quantifier, negation, or a modal, then A automatically takes scope over B. Szabolcsi argues that natural language quantifies over times and worlds in a syntactically explicit manner.

Szabolcsi addresses some issues in Generalized Quantifier Theory that pose problems of analysis, but argues that those problems arise from the absence of a fully articulated compositional analysis. She suggests that the scope of an operator, such as a quantifier, is the domain within which the operator has the potential to affect the interpretation of other expressions, and she argues that the notion of scope is similar in logic and in linguistic syntax.

Szabolcsi argues that the surface structure directly determines the scope interactions between different operators, such as quantifiers, pronouns, and negative polarity items. She presents an analysis of the scope interactions of quantifier phrases in natural languages: if each of two operators is in the syntactic domain of the other, the structure is potentially ambiguous.