Penn Logic and Computation Seminar 2013-2014


  • Harvey Friedman
    Ohio State University

    Wednesday, July 30, 2014, at 10:00 am in DRL A5

    Testing the consistency of mathematics by computer

    We adapt the new Pi01 incompleteness discussed in the first talk to a computer investigation which either provides an inconsistency in certain large cardinal hypotheses or arguably confirms their consistency.


  • Harvey Friedman
    Ohio State University

    Tuesday, July 29, 2014, at 10:00 am in DRL A5

    From Russell's Paradox and Countable Model Theory

    We present natural paths from Russell's Paradox that lead to systems that interpret ZFC and certain large cardinal hypotheses. We also present such paths from countable model theory.


  • Harvey Friedman
    Ohio State University

    Monday, July 28, 2014, at 10:00 am in DRL A5

    Concrete Mathematical Incompleteness

    After Boolean Relation Theory, we have been using order invariant relations and graphs for Pi01 incompleteness from ZFC and certain large cardinal hypotheses. We present the state of the art.


  • Gregory Igusa
    University of Notre Dame

    Monday, April 7, 2014, at 3:00 pm in DRL 2C4

    In complexity theory, there is an empirically observed phenomenon: sometimes a problem is easy to solve in practice despite having provably high worst-case complexity. This discrepancy has frequently been dismissed as an unfortunate situation in which theory fails to predict practice, but there has been recent work in complexity theory to study this phenomenon rigorously. In 1986, Levin introduced "average-case complexity," a form of complexity that takes into account all instances of a problem, not just the most difficult ones. Then, in 2003, Kapovich, Miasnikov, Schupp and Shpilrain introduced "generic-case complexity," a form of complexity that looks at the majority of instances of a problem while completely ignoring the most difficult ones. While investigating generic-case complexity, KMSS noticed that there are unsolvable problems that are generically solvable in linear time, which opens the door to questions about generic computability in general.

    In 2012, Jockusch and Schupp introduced the purely computability-theoretic notion of "generic computability." A real is generically computable if there is a partial computable function that correctly computes the real on its domain, and whose domain has density 1 (in the sense of asymptotic density). Intuitively, a real is generically computable if there is an algorithm that usually halts and always gives correct answers. They also introduced a related notion, "coarse computability": a real is coarsely computable if there is a total computable function that is correct about that real on a set of density 1 (an algorithm that always halts and usually gives correct answers). To study the degree structure of generic computability, they also defined "generic reducibility," a notion of reducibility in which oracles, like computations, are only required to halt on a set of density 1.

    Neither generic computability nor coarse computability implies the other, but there are results showing that, morally, generic computability is a stronger notion than coarse computability. We use generic reducibility to investigate how close these two notions are to each other by investigating the generic degrees of coarsely computable reals. Although coarsely computable reals can be made quite difficult to generically compute, the only Turing information that can be coded into them is hyperarithmetic information. In fact, when this is made precise, it can be used to provide a characterization of the hyperarithmetic Turing degrees in terms of generic reduction and coarse computability.
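
    For reference, the standard definitions (essentially those of Jockusch and Schupp; the talk may phrase them slightly differently) can be written out as follows. A set $S \subseteq \mathbb{N}$ has density 1 if $\lim_{n\to\infty} |S \cap \{0,\ldots,n-1\}|/n = 1$. A real, identified with a set $A \subseteq \mathbb{N}$, is generically computable if there is a partial computable function $\varphi$ with $\varphi(n) = A(n)$ for every $n \in \mathrm{dom}(\varphi)$ and $\mathrm{dom}(\varphi)$ of density 1; it is coarsely computable if there is a total computable function $f$ such that $\{n : f(n) = A(n)\}$ has density 1.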


  • Máté Szabó
    Carnegie Mellon University

    Monday, March 31, 2014, at 3:00 pm in DRL 2C4

    In his famous paper from 1936, An Unsolvable Problem of Elementary Number Theory, Alonzo Church identified the intuitive notion of effective calculability with the mathematically precise notion of recursiveness. This proposal, known as Church's Thesis, has been widely accepted. Only a few people have argued against it. One of them is László Kalmár, who in 1957 gave a talk in Amsterdam at the International Colloquium "Constructivity in Mathematics," entitled An Argument Against the Plausibility of Church's Thesis. The talk was published in the conference proceedings in 1959. The aim of this paper is to present and analyze Kalmár's argument in detail.

    It is very useful to have an insight into Kalmár's general, sometimes peculiar, views on the foundations of mathematics. According to him, mathematics not only stems from experience and empirical facts, but even its justification is in part empirical. The development of mathematics, mathematical methods and notions is endless. As a consequence, mathematics cannot have a fixed foundation once and for all. Finally, presenting mathematical results in given, fixed frameworks is useful for precision and clarity, but mathematics is always done on an intuitive level, not in one or many of these frameworks.

    Kalmár considers Church's Thesis a pre-mathematical statement: it cannot be a mathematical theorem or definition, as it identifies a mathematically precise notion with an intuitive one. Thus, his argument against the plausibility of the thesis is also pre-mathematical. Kalmár begins by discussing his understanding of effective calculability, which is less restrictive than Church's, and questions the "objective meaning" of the notion of uniformity. That allows him to draw some "very unplausible" consequences of the thesis. It "implies the existence of an absolutely undecidable proposition which can be decided." This proposition is absolute in the sense that it is not undecidable relative to a fixed framework, as the Gödel sentence is, but it is only one proposition and not an infinite set of propositions, as Church's undecidable problems are. However, the proposition can be decided on an intuitive level.

    Kalmár's different understanding of the notions of effective calculability and uniformity was not motivated only by his general views on the foundations of mathematics. His epistemological as well as his political views played a significant role in it. He expressed these views in his talks on the same topic in Hungarian in the 1950s. Within this broader context, Kalmár's rather short and peculiar paper appears a bit more appealing. However, in the end his argument does not affect Church's Thesis, given the usual understanding of effective calculability as mechanical procedures.


  • Thomas Kent
    Marywood University

    Monday, March 24, 2014, at 3:00 pm in DRL 2C4

    Enumeration Degrees and $s$-degrees

    As is commonly known, Turing reducibility allows us to query an oracle set for both positive and negative information and gives rise to the familiar Turing degrees. On the other hand, enumeration reducibility can be viewed as allowing only the transmission of positive information. Amazingly enough, restricting the manner in which we can gather information from oracles actually gives rise to a very rich structure known as the enumeration degrees. In fact, there is a very natural embedding of the Turing degrees into the enumeration degrees, and so the enumeration degrees can be viewed as an extension of the Turing degrees. This structure is so rich that the eminent computability theorist Andrea Sorbi once described it as a "zoo." So pack your bags and come along for an exciting sightseeing tour as we seek out some of the more exotic creatures that exist in this land. If there is time, we will make a short foray into the country of the $s$-degrees to take a peek at what lies behind those distant borders.
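
    For orientation, a standard formulation of enumeration reducibility (the usual textbook definition, not necessarily the exact form used in the talk): $A \le_e B$ if and only if there is a c.e. set $W$ such that $A = \{x : (\exists \text{ finite } D)\,[\langle x, D\rangle \in W \text{ and } D \subseteq B]\}$; equivalently, every enumeration of $B$ can be uniformly transformed into an enumeration of $A$. The natural embedding mentioned above sends the Turing degree of $A$ to the enumeration degree of $A \oplus \overline{A}$, which packages the negative information about $A$ as positive information.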


  • David Lippel
    Haverford College

    Monday, February 24, 2014, at 3:00 pm in DRL 2C4

    Reverse VC Calculations

    Let F be a family of sets, for example, a uniformly definable semi-algebraic family in real or p-adic n-space. The Vapnik-Chervonenkis (VC) dimension of F is a measurement of the combinatorial complexity of F. Once you know the VC dimension of F, theorems from computational geometry, like the Epsilon-Net Theorem, give nice geometric consequences for F. After introducing VC dimension and the Epsilon-Net Theorem, I will discuss a statistical strategy for reversing the flow of information in this theorem. Instead of starting with knowledge of the VC dimension, we merely hypothesize "dimension=d" for some value d. Then, we observe the geometric behavior of F using computer experiments and compare the observed behavior with the behavior that is predicted by the theorem (under the hypothesis "dimension=d"). If our observed results have sufficiently low probability (conditioned on "dimension=d"), then we can reject the hypothesis "dimension=d" with a high degree of confidence. Ultimately, we hope to use such methods to shed light on conjectures about VC density in the p-adics. This project is joint work with Deirdre Haskell and Nigel Pynn-Coates.
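
    For orientation, the underlying notions (in one standard form; the talk's conventions may differ): a family $F$ of subsets of a set $X$ shatters $A \subseteq X$ if every subset of $A$ equals $A \cap S$ for some $S \in F$, and the VC dimension of $F$ is the largest cardinality of a shattered set. One common version of the Epsilon-Net Theorem (Haussler-Welzl) states that if $F$ has VC dimension $d$, then for any probability measure on $X$ and any $\varepsilon > 0$, a sample of $O((d/\varepsilon)\log(1/\varepsilon))$ independent random points is, with high probability, an $\varepsilon$-net for $F$: it meets every member of $F$ of measure at least $\varepsilon$.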


  • Lynn Scow
    Vassar College

    Monday, November 18, 2013, at 3:00 pm in DRL 4C8

    $\mathcal{I}$-indexed indiscernible sets and trees

    Fix any structure $\mathcal{I}$ on an underlying set $I$. An $\mathcal{I}$-indexed indiscernible set is a set of parameters $A = \{a_i : i \in I\}$ where the $a_i$ are same-length finite tuples from some structure $M$ and $A$ satisfies a certain homogeneity condition. In this talk, I will discuss examples of trees $\mathcal{I}$ for which $\mathcal{I}$-indexed indiscernible sets are particularly well-behaved. In particular, we will look at the structure $\mathcal{I}_0 = (\omega^{<\omega}, \unlhd, <_{\mathrm{lex}}, \wedge)$, where $\unlhd$ is the partial order on the tree, $\wedge$ is the meet in this order, and $<_{\mathrm{lex}}$ is the lexicographical order. By a dictionary theorem that I will present in this talk, known results about indiscernibles from model theory yield alternate proofs that certain classes of finite trees are Ramsey.
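
    The homogeneity condition referred to above is usually stated as follows (the standard definition; the talk may use an equivalent variant): $A = \{a_i : i \in I\}$ is $\mathcal{I}$-indexed indiscernible if whenever two finite tuples $(i_1,\ldots,i_n)$ and $(j_1,\ldots,j_n)$ from $I$ have the same quantifier-free type in $\mathcal{I}$, the tuples $(a_{i_1},\ldots,a_{i_n})$ and $(a_{j_1},\ldots,a_{j_n})$ have the same type in $M$.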


  • Roman Kossak
    City University of New York

    Monday, November 11, 2013, at 3:00 pm in DRL 4C8

    Model Theory of the Standard Cut

    Much of the model theory of arithmetic can be seen as results about structures of the form $(M,\omega)$, where $M$ is a nonstandard model of PA and $\omega$ is its standard cut. Our joint work in [1] resulted from an attempt to develop a more systematic study of arithmetic in the language with the standardness predicate. In the talk, I will give a survey of results and outline some proofs.

    [1] Richard Kaye, Roman Kossak, and Tin Lok Wong, Adding standardness to nonstandard models, in Studies in Weak Arithmetics, edited by Patrick Cegielski, Charalampos Cornaros, and Constantin Dimitracopoulos, CSLI Lecture Notes no. 196, pp. 179-198, 2013.


  • Peter Freyd
    University of Pennsylvania

    Monday, October 21, 2013, at 3:00 pm in DRL 4C8

    A new approach to constructive analysis


  • Jason Rute
    Pennsylvania State University

    Monday, October 7, 2013, at 3:00 pm in DRL 4C8

    On the computability of rates of metastable convergence

    There are many ways to express that a sequence converges. They range from the most explicit but least uniform---a rate of convergence; to the moderately explicit and moderately uniform---a bound on the number of jumps by epsilon; to the least explicit but most uniform---a bound on metastable convergence (which I will define in this talk). Using proof theory, Kohlenbach showed that uniform metastable bounds can be computably extracted from the proof of a convergence theorem. Using model theory, Avigad and Iovino showed that the metastable bounds of a convergence theorem are always uniform---but their methods do not provide a way to compute the bounds. Using computable analysis and computable model theory, I show that not only are the bounds always uniform, but they can be computed from the statement of the theorem alone (without regard to the proof).
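
    For reference, metastable convergence (in the sense popularized by Tao, presumably the notion intended here) can be stated as follows: a sequence $(a_n)$ in a metric space converges metastably if for every $\varepsilon > 0$ and every function $F : \mathbb{N} \to \mathbb{N}$ there is an $n$ such that $d(a_i, a_j) \le \varepsilon$ for all $i, j \in [n, F(n)]$. A metastable bound is a function $b(\varepsilon, F)$ bounding such an $n$; classically this is equivalent to the sequence being Cauchy, but unlike a rate of convergence, metastable bounds can often be chosen uniformly over large families of sequences.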



    Seminars from previous years