☝️ NO submission is required for labs
In this lab, you'll build the first part of an app that will allow users to view Tumblr posts. You'll work in collaborative pairs (pair programming) to apply the skills you've learned so far building your Flix App assignment. Just like the Flix App, you'll create a network request to fetch data from the web; however, instead of movies, you'll be getting blog post data from the Tumblr API.
The checkpoints below should be implemented as pairs. In pair programming, there are two roles: navigator and driver.
Delete the default ViewController.swift file and create a custom view controller file named PhotosViewController. Set the custom class of the view controller in the storyboard to PhotosViewController.
Send a test request to the Tumblr API:
The response dictionary contains two dictionaries, blog and posts. posts is a giant array of dictionaries where each dictionary represents a post and all of its associated information using key-value pairs. The image url we want is nested inside photos -> original_size -> url. This will require us to dig into these nested elements and cast to appropriate types as we go.
Create a property in your PhotosViewController class to store posts
This property will store the data returned from the network request. It's convention to create properties near the top of the view controller class where we create our outlets.
Looking at the sample request in the browser, we see the posts key returns an array of dictionaries, so that's the type we will make our property.
We will initialize it as an empty array so we don't have to worry about it ever being nil later on.
var posts: [[String: Any]] = []
Create a request to the Tumblr Photo Posts endpoint for the Humans of New York blog by adding the following network request snippet to the PhotosViewController's viewDidLoad() method.
The url specifies which API to go to and what set of data to retrieve. Printing the dataDictionary to the console should look very similar to your test request in the browser.
// Network request snippet
let url = URL(string: "https://api.tumblr.com/v2/blog/humansofnewyork.tumblr.com/posts/photo?api_key=Q6vHoaVm5L1u2ZAW1fqv3Jw48gFzYVg9P0vH0VHl3GVy6quoGV")!
let session = URLSession(configuration: .default, delegate: nil, delegateQueue: OperationQueue.main)
session.configuration.requestCachePolicy = .reloadIgnoringLocalCacheData
let task = session.dataTask(with: url) { (data, response, error) in
if let error = error {
print(error.localizedDescription)
} else if let data = data,
let dataDictionary = try! JSONSerialization.jsonObject(with: data, options: []) as? [String: Any] {
print(dataDictionary)
// TODO: Get the posts and store in posts property
// TODO: Reload the table view
}
}
task.resume()
Get the posts and store them in your posts property.
Use the respective keys and bracket notation to dig into the nested dictionaries.
Cast the value returned from each key to its respective type.
// Get the dictionary from the response key
let responseDictionary = dataDictionary["response"] as! [String: Any]
// Store the returned array of dictionaries in our posts property
self.posts = responseDictionary["posts"] as! [[String: Any]]
Add and Configure a Table View in PhotosViewController:
The numberOfRowsInSection method simply tells the table view how many rows, or cells in this case, to create. How many cells do we want? As many as we have posts. We can get that number from the count property of our posts array. So, instead of returning a hard-coded number like 5, we will return posts.count. This is where you could get into trouble if posts were nil, which is why we initialized posts as an empty array: although empty, it is not nil.
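In code, that data source method ends up as short as this sketch:
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    // One cell per post; posts is never nil since we initialized it as an empty array
    return posts.count
}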
Add and Configure a Custom Table View Cell:
In the tableView(_:cellForRowAt:) method we will change let cell = UITableViewCell() to…
to…let cell = tableView.dequeueReusableCell(withIdentifier: "PhotoCell", for: indexPath) as! PhotoCell
Set up the Image View in your Custom Cell:
Add a UIImageView to your prototype cell. Make sure to create an outlet from the image view to your PhotoCell class and not your PhotosViewController class; after all, we created the custom cell class to control the properties of our reusable cell. DO NOT name this outlet imageView, to avoid colliding with the default imageView property of the UITableViewCell base class.
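For reference, a minimal sketch of the custom cell class once the outlet is connected (the outlet name photoImageView matches the snippet used later in this lab):
import UIKit

class PhotoCell: UITableViewCell {
    // Outlet from the prototype cell's image view; deliberately NOT named imageView
    @IBOutlet weak var photoImageView: UIImageView!
}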
Get the post that corresponds to a particular cell's row:
The tableView(_:cellForRowAt:) method is called each time a cell is made or referenced. Each cell will have a unique indexPath.row, starting at 0 for the first cell, 1 for the second, and so on. This makes indexPath.row very useful for pulling objects out of an array at particular index points and then using information from a particular object to populate the views of our cell.
Inside the tableView(_:cellForRowAt:) method, pull out a single post from our posts array:
let post = posts[indexPath.row]
Get the photos dictionary from the post:
It's possible to get a nil value back for photos, i.e. maybe no photos exist for a given post. We can check to make sure it is not nil before unwrapping, using a shorthand Swift syntax called if let.
post is a dictionary containing information about the post. We can access the photos array of a post using a key and subscript syntax.
if let photos = post["photos"] as? [[String: Any]] {
// photos is NOT nil, we can use it!
// TODO: Get the photo url
}
Get the image's url:
// 1.
let photo = photos[0]
// 2.
let originalSize = photo["original_size"] as! [String: Any]
// 3.
let urlString = originalSize["url"] as! String
// 4.
let url = URL(string: urlString)
Set the image view
We'll be bringing in a third-party library to help us fetch and display our photos. To do that, we'll use a dependency manager called CocoaPods. If you haven't already, install CocoaPods on your computer now.
Navigate to your project using the Terminal and create a Podfile by running pod init.
Add pod 'AlamofireImage' to your Podfile; this will bring the AlamofireImage library into your project.
In the Terminal run pod install. When it's finished installing your pods, you'll need to close your xcodeproj and open the newly created xcworkspace file.
Import the AlamofireImage framework at the top of the file, under import UIKit. This will give the file access to the AlamofireImage framework.
import AlamofireImage
Call the AlamofireImage method af_setImage(withURL:) on your image view, passing in the url from which it will retrieve the image.
cell.photoImageView.af_setImage(withURL: url!)
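Putting the pieces from this milestone together, tableView(_:cellForRowAt:) might look roughly like this sketch:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "PhotoCell", for: indexPath) as! PhotoCell
    let post = posts[indexPath.row]
    if let photos = post["photos"] as? [[String: Any]] {
        // Dig into the nested dictionaries to reach the image url
        let photo = photos[0]
        let originalSize = photo["original_size"] as! [String: Any]
        let urlString = originalSize["url"] as! String
        let url = URL(string: urlString)
        cell.photoImageView.af_setImage(withURL: url!)
    }
    return cell
}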
Update the table view to display any new information
Our table view will likely be created before we get our photos data back from the network request. Anytime we have fresh or updated data for our table view to display, we need to call:
Do this inside the network request completion handler, right after we load the data we got back into our posts
property.
self.tableView.reloadData()
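For reference, here is the earlier network request completion handler with both TODOs filled in:
let task = session.dataTask(with: url) { (data, response, error) in
    if let error = error {
        print(error.localizedDescription)
    } else if let data = data,
        let dataDictionary = try! JSONSerialization.jsonObject(with: data, options: []) as? [String: Any] {
        // Get the dictionary from the response key
        let responseDictionary = dataDictionary["response"] as! [String: Any]
        // Store the returned array of dictionaries in our posts property
        self.posts = responseDictionary["posts"] as! [[String: Any]]
        // Reload the table view now that fresh data has arrived
        self.tableView.reloadData()
    }
}
task.resume()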
If your app crashes with the exception: Unknown class PhotosViewController in Interface Builder file
, try following the steps in this stackoverflow answer.
Compile Error: "No such module AlamofireImage"
Extend your Tumblr app by building a Detail View. To do that, you'll be passing data between view controllers and implementing push (show) navigation.
This milestone will have you work with Navigation Controllers and transitions between screens while passing data. Review our quickstart for Navigation Controllers and Navigation Controllers guide for more info.
Create a new View Controller for the details screen (PhotoDetailsViewController
):
- Create a new Cocoa Touch Class (without a XIB, subclass of UIViewController). (File -> New -> File...
)
- Using the Object Library, add a UIViewController to the Main.storyboard by dragging it from the Object Library and placing it on the storyboard.
- Set the custom class of the new view controller to PhotoDetailsViewController (similar to what we did for the PhotosViewController above).
Implement PhotoDetailsViewController:
- The view should consist of a single UIImageView.
- PhotoDetailsViewController should have a single public property for the photo url.
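A minimal sketch of what PhotoDetailsViewController might contain; the outlet name photoImageView and the property name photoURL are assumptions, not requirements:
import UIKit
import AlamofireImage

class PhotoDetailsViewController: UIViewController {
    // Assumed outlet name for the single image view
    @IBOutlet weak var photoImageView: UIImageView!
    // Assumed name for the public property used to pass the photo url in
    var photoURL: URL!

    override func viewDidLoad() {
        super.viewDidLoad()
        photoImageView.af_setImage(withURL: photoURL)
    }
}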
Embed PhotosViewController
inside of a Navigation Controller to allow it to push the details screen onto the nav stack.
Wire up the tap on the photo to launch into the detail view:
- In the storyboard, ctrl-drag from the cell to the PhotoDetailsViewController and choose "Show".
- In PhotosViewController, implement the prepare(for:sender:) method to pass the photo to the details screen.
let vc = segue.destination as! PhotoDetailsViewController
let cell = sender as! UITableViewCell
let indexPath = tableView.indexPath(for: cell)!
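One way to finish the hand-off, continuing the snippet above (a sketch; it reuses the same key digging as cellForRowAt and assumes the photoURL property name from the earlier sketch):
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    let vc = segue.destination as! PhotoDetailsViewController
    let cell = sender as! UITableViewCell
    let indexPath = tableView.indexPath(for: cell)!
    // Dig the image url out of the selected post, just like in cellForRowAt
    let post = posts[indexPath.row]
    if let photos = post["photos"] as? [[String: Any]] {
        let originalSize = photos[0]["original_size"] as! [String: Any]
        vc.photoURL = URL(string: originalSize["url"] as! String)
    }
}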
- Remove the gray selection effect:
- Implement `tableView:didSelectRowAtIndexPath` inside of `PhotosViewController`
- Get rid of the gray selection effect by deselecting the cell with animation
`tableView.deselectRow(at: indexPath, animated: true)`
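In full, that delegate method is just:
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    // Deselect with animation to remove the gray selection effect
    tableView.deselectRow(at: indexPath, animated: true)
}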
The following bonus user stories are optional and meant to serve as an extra challenge if you'd like to take your app further.
Modify numberOfSectionsInTableView to return the number of posts. numberOfRowsInSection should now return 1.
Modify the cellForRowAtIndexPath method to use the section number (instead of the row number) to index into your array of posts.
Implement tableView:viewForHeaderInSection: and tableView:heightForHeaderInSection:.
.let headerView = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 50))
headerView.backgroundColor = UIColor(white: 1, alpha: 0.9)
let profileView = UIImageView(frame: CGRect(x: 10, y: 10, width: 30, height: 30))
profileView.clipsToBounds = true
profileView.layer.cornerRadius = 15;
profileView.layer.borderColor = UIColor(white: 0.7, alpha: 0.8).cgColor
profileView.layer.borderWidth = 1;
// Set the avatar
profileView.af_setImage(withURL: URL(string: "https://api.tumblr.com/v2/blog/humansofnewyork.tumblr.com/avatar")!)
headerView.addSubview(profileView)
// Add a UILabel for the date here
// Use the section number to get the right URL
// let label = ...
return headerView
Hint: This section of our table view guide shows how to change the size of the font. See the UILabel documentation for how to change other properties, such as the text color.
Tapping the photo on the details screen should launch a new FullScreenPhotoViewController, similar to the photo view in the Facebook app.
Unlike before, there is no cell to ctrl-drag from when going from PhotoDetailsViewController to FullScreenPhotoViewController. Instead you'll need to create a segue from PhotoDetailsViewController to FullScreenPhotoViewController and give the segue a unique name. (Hint: you'll also need to set userInteractionEnabled to true for the imageView so it can receive taps.)
☝️ NO submission is required for labs
In this exercise, you will add and configure Auto Layout constraints to your Flix Movie app in order to achieve a dynamic layout that looks good on multiple device sizes and orientations.
At the end of the exercise your app will look like this:
Add constraints to the table view for Left/Right/Top/Bottom constraints.
Common issue: Pinning to the top of the Top Layout Guide (pictured left) vs. Top of view (pictured right)
Run the app on iPhone 7 Plus, 7 and SE simulators and use your view debug tools to observe the results after adding constraints.
Use >= inequality bottom constraints to avoid unnecessary vertical stretching. You will need bottom inequality constraints for any views that may be the bottom-most view. For instance, a movie with a short overview will have the poster view as the bottom-most view, and the cell should expand to fit that view with a minimum spacing determined by the constant of the >= inequality constraint. For a long overview, the label will extend down past the bottom of the poster view and should expand the bottom of the cell to accommodate the full size of the label, with a minimum spacing set by its >= bottom inequality constraint.
Configure inequality constraints for the poster image view and overview label
Set rowHeight and estimatedRowHeight
tableView.rowHeight = UITableViewAutomaticDimension
tableView.estimatedRowHeight = 50
The approach to Auto Layout in the collection view is very similar to what we did in the table view cell; the main difference is that collection view cells do not have an automatic resizing property. Any resizing of collection view cells needs to be handled in code using the appropriate collection view data source, delegate, and flow layout methods. For our use case, we only have a single image view in the cell, so we can just pin the image view on all sides using edge constraints.
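For reference only (not needed for this lab, since pinning the image view's edges is enough here), a sketch of sizing collection view cells in code via UICollectionViewDelegateFlowLayout; the three-per-row math is an illustrative assumption:
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
    // Example: three cells per row with a 2:3 width-to-height ratio (illustrative numbers)
    let width = collectionView.bounds.width / 3
    return CGSize(width: width, height: width * 1.5)
}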
☝️ NO submission is required for labs
Tired of boring button-centric UI? Well…in iOS it's easy to implement interactive gestures and fun animations to give your UI some well deserved pop! In this lab you'll build an app that allows users to interact with characters from the iconic video game, Mario Kart, panning, scaling, rotating and then sending them zooming off the screen! 🏎
The user stories for this lab are split up into 3 tiers. Tier 1 stories will introduce the core concepts of working with gestures and animations. Your goal should be to get through Tier 1 stories during your in-class lab time.
After completing Tier 1 stories you'll be able to…
Tier 2 & 3 stories build on the core concepts from tier 1 and yield a more complete and nuanced app. Try out Tier 2-3 outside of class if you really find gestures & animations intriguing or go back to them later if you want to include some of these features in your group project app.
In this user story, we'll leverage a pan gesture recognizer and its location property to move around the position of our karts.
Add the image assets to your project
Add app_icon.png to Assets.xcassets -> AppIcon -> iPhone App @2x.
Add the kart_[x]@2x.png images and the background@2x.png image you downloaded to your Assets.xcassets folder.
The @2x
in the image file name helps Xcode place the file in the correct resolution slot automatically.
Layout your views
Access the Media Library by long-clicking on the Object Library icon (see gif below) or use the quick key: command + shift + m.
You can duplicate views by holding option while you click and drag.
Add pan gesture recognizers for kart image views.
Create and connect actions for your pan gesture recognizers.
Creating an action will trigger a method to be called anytime your gesture recognizer recognizes a gesture.
Control + drag from a gesture recognizer to your view controller swift file to create an action and associated function. You can name the function something like: didPanKartView.
⚠️ Drag from the gesture recognizer listed in the Document Outline, NOT from the image view in the storyboard. (See example below)
⚠️ Make sure you set the type to UIPanGestureRecognizer when creating the action. (See example below)
Connect the remaining pan gesture recognizers by control + dragging from each one to the same function you created for the first action.
Connecting all of the gesture recognizers to a single function allows us to reuse our pan logic for each kart view. This is especially useful if we want to add more karts in the future.
Code the logic to move the kart when it's panned.
Access the location property of the pan gesture recognizer.
let location = sender.location(in: view)
Where should I write this code?
Any code you want to run during a panning event should be in the body of the function (action) you created for the gesture recognizer.
What's sender?
When a gesture recognizer is triggered and calls its associated function (action), it includes itself as the sender. When you reference the sender in the body of the function, you are referencing the specific gesture recognizer that was triggered. This is how you access properties of the gesture recognizer like its location on the screen as well as the view it's attached to.
What's location?
location is a property of pan gesture recognizers that tells us where the user has panned within some area that we specify. In this case, it's the location in reference to the entire screen, aka the view. Positions of views are described using a data structure called CGPoint, which consists of x and y coordinates.
What's view?
All view controllers come with a root view called view at the top of the view hierarchy. This is the main view that we are adding all of our other views into. In the above line, we are asking for the pan gesture's current location within the root view.
Print the current location returned from the gesture recognizer.
print("Location: x: \(location.x), y: \(location.y)")
📲 RUN YOUR APP and pan on each kart. You should see the position of the gesture recognizer printed out in the console as you pan your finger.
Notice how…
Access the view of the kart that was panned.
let kartView = sender.view!
Each gesture recognizer knows the view it's attached to. We can ask the gesture recognizer (sender) for its view in order to access the specific view that was panned (i.e. which kart image view).
We force unwrap the view property of the sender using a ! to avoid having to continuously account for it as an optional value. In our case, it's safe to do so since we can be assured that any gesture recognizer calling this method will have a view attached and not be nil.
Set the kart view's position to the current position of the gesture recognizer.
kartView.center = location
What's center? All views have a center property which describes the point of their position. Like the location of the pan gesture recognizer, the center property is a CGPoint with values for x and y coordinates. The position of the center is in reference to the view that contains the view; this is called the super view.
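Putting the pieces together, the pan action looks roughly like this sketch:
@IBAction func didPanKartView(_ sender: UIPanGestureRecognizer) {
    // Where the pan currently is, relative to the root view
    let location = sender.location(in: view)
    // The specific kart image view this recognizer is attached to
    let kartView = sender.view!
    // Move the kart to follow the user's finger
    kartView.center = location
}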
📲 RUN YOUR APP and see if you can move your kart!
Most of the concepts you applied to get the karts panning will be used in a similar way to scale the karts using a pinch gesture. The main difference is that we will be referencing the pinch gesture's scale property to scale our karts up and down.
Add pinch gesture recognizers to kart views.
Create an action for a pinch gesture recognizer and connect the remaining.
Control + drag from a pinch gesture recognizer in the Document Outline to create an action in your view controller swift file. You can name the function something like: didPinchKartView.
Connect the remaining pinch gesture recognizers by ctrl + dragging them each to the same action you created for the first pinch gesture recognizer.
Access the scale property of the gesture recognizer that was pinched.
let scale = sender.scale
Similar to the pan gesture recognizer's location
property, pinch gesture recognizers have a scale
property that corresponds to the size of the user's pinch.
Print the scale value to the console
print("scale: \(scale)")
How do you pinch on the simulator?
• Hold down the option key and you'll see two gray circles appear. Those represent the user's fingers.
• Move the cursor while continuing to hold the option key until the circles are close together.
• Now, additionally hold down the shift key and move the two circles over the object you want to pinch.
• Release the shift key while continuing to hold the option key, then click on the object you want to pinch and (while continuing to hold the click) move the cursor to pinch in and out.
📲 RUN YOUR APP and pinch on each kart. You should see the pinch value of the gesture recognizer printed out in the console as you pinch.
Notice how…
• Pinching on any of the karts calls our pinching function.
• The pinching function is called continuously during a pinching event.
• Wherever the pinching starts corresponds to a scale value of 1.
Access the view of the kart that was pinched.
let kartView = sender.view!
Adjust the scale of the kart view using the scale from the pinch gesture recognizer.
kartView.transform = CGAffineTransform(scaleX: scale, y: scale)
All views have a transform property which, among other things, allows you to modify the view's scale, rotation, and translation. The transform property is of type CGAffineTransform, which isn't really too important for us beyond giving us handy constructors to make a new transform with whatever modifications we'd like, such as the scale. In the above line, we create the transform with the scale value we get from our pinch gesture recognizer. We'll plug the scale value in for both x and y to get a uniform scale in both width and height.
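Assembled, the pinch action looks roughly like this sketch:
@IBAction func didPinchKartView(_ sender: UIPinchGestureRecognizer) {
    // How much the user has pinched since the gesture began (1 means no change)
    let scale = sender.scale
    let kartView = sender.view!
    // Apply a uniform scale in both width and height
    kartView.transform = CGAffineTransform(scaleX: scale, y: scale)
}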
📲 RUN YOUR APP and pinch to scale your karts up and down!
Rotating the kart using a rotation gesture is going to be almost identical to scaling using a pinch. By this point you're really getting the hang of gestures, so this should be a cinch!
Add rotation gesture recognizers to kart views.
Create an action for a rotation gesture recognizer and connect the remaining.
Control + drag from a rotation gesture recognizer in the Document Outline to create an action in your view controller swift file. You can name the function something like: didRotateKartView.
Connect the remaining rotation gesture recognizers by ctrl + dragging them each to the same action you created for the first.
Access the rotation property of the gesture recognizer that was rotated.
let rotation = sender.rotation
Similar to the pinch gesture recognizer's scale
property, rotation gesture recognizers have a rotation
property that corresponds to the amount of rotation in the user's gesture.
Print the rotation value to the console
print("rotation: \(rotation)")
📲 RUN YOUR APP and rotate on each kart. You should see the rotate value of the gesture recognizer printed out in the console as you rotate.
• As with all gestures, an actual device is the preferred way to test.
• When using the simulator, the rotation gesture works similar to the pinch gesture, only instead of moving the circles (fingers) in and out, you're moving them in a circular motion.
• Our current setup only allows for one gesture recognizer to work at a time. So, if you make a pinch motion before your rotation, the pinch gesture recognizer will claim the gesture event and the rotation gesture will not be triggered.
• The rotation values don't seem to be in degrees 🤔 They're not…they're in radians! Silly engineers…🤓
Access the view of the kart that was rotated.
let kartView = sender.view!
Adjust the rotation of the kart view using the rotation from the rotation gesture recognizer.
kartView.transform = CGAffineTransform(rotationAngle: rotation)
Similar to the approach we used to modify the scale, we'll create a new transform using one of CGAffineTransform's handy constructors, which takes an angle (in radians). We'll then set the transform property of the view to our rotated transform to rotate the view.
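Assembled, the rotation action looks roughly like this sketch:
@IBAction func didRotateKartView(_ sender: UIRotationGestureRecognizer) {
    // Rotation (in radians) since the gesture began
    let rotation = sender.rotation
    let kartView = sender.view!
    // Rotate the kart view by that angle
    kartView.transform = CGAffineTransform(rotationAngle: rotation)
}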
📲 RUN YOUR APP and rotate your karts around and around!
This story will incorporate view animations that will be triggered using a tap gesture recognizer. Incorporating the tap gesture will be almost identical to the last user stories.
Add tap gesture recognizers to kart views.
Create an action for a tap gesture recognizer and connect the remaining.
Control + drag from a tap gesture recognizer in the Document Outline to create an action in your view controller swift file. You can name the function something like: didTapKartView.
Connect the remaining tap gesture recognizers by ctrl + dragging them each to the same action you created for the first.
Set the number of taps required to 2.
Tap gesture recognizers can be configured to respond to different numbers of taps. You can also configure the number of touches which is how many fingers the user is required to use during the gesture.
Test the tap gesture recognizer.
print("Double tap recognized")
📲 RUN YOUR APP and double tap on each kart. You should see the test statement print out in the console.
Access the view of the kart that was tapped.
let kartView = sender.view!
Test moving the kart's position
kartView.center.x += 50
The karts move 50pts on the x-axis, however the movement is more like teleportation than smooth motion. We'll fix that next by using a view animation method.
📲 RUN YOUR APP and see if the karts move!
Animate the movement of the kart's position
View animation is a breeze using one of several animation methods available in the UIView class. We'll start with the simplest version, which allows you to set the duration of the animation and the end state of the view you're animating.
When Xcode autocompletes the animate method, press enter while the () -> Void closure placeholder is highlighted to expand it and reveal its body.
UIView.animate(withDuration: 0.8) {
// Closure body
kartView.center.x += 50
}
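Assembled, the double-tap action looks roughly like this sketch:
@IBAction func didTapKartView(_ sender: UITapGestureRecognizer) {
    let kartView = sender.view!
    // Animate the kart 50 pts to the right over 0.8 seconds
    UIView.animate(withDuration: 0.8) {
        kartView.center.x += 50
    }
}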
📲 RUN YOUR APP and test out your animation!
Tune your race animation!
Now that you can animate your karts, it's up to you to tune the values to get the perfect race effect
UIView.animate(withDuration: 0.6) {
kartView.center.x += 400
}
📲 RUN YOUR APP and you're off to the races! 🏎
Our app has come a long way! We can move, scale, rotate and race our karts off the screen…the only problem is getting them back. In this final tier 1 user story, we'll add a long press gesture to trigger a kart reset.
Store the starting position of the karts.
In order to reset the karts to their original position, we'll need to know where to reset them to. We can store the position of the karts when our app first loads for reference.
We'll want to access these properties in several places in our app, so declare them in a broad scope, i.e. in the same place we created our outlets.
var startingPointKartView0 = CGPoint()
var startingPointKartView1 = CGPoint()
var startingPointKartView2 = CGPoint()
The viewDidLoad method is a great place for any initial setup you need to do before your view controller is presented. As the name suggests, this method is called once all of the view controller's views have been loaded, so it's safe to reference properties from our kart views.
startingPointKartView0 = kartView0.center
startingPointKartView1 = kartView1.center
startingPointKartView2 = kartView2.center
Add a long press gesture recognizer to the background image view.
Create an action for the long press gesture recognizer.
Control + drag from the long press gesture recognizer in the Document Outline to create an action in your view controller swift file. You can name the function something like: didLongPressBackground.
Code the logic to reset the kart positions
kartView0.center = startingPointKartView0
kartView1.center = startingPointKartView1
kartView2.center = startingPointKartView2
📲 RUN YOUR APP and see if you can long press to reset your karts.
Animate the resetting of karts.
The current resetting of the karts is a bit abrupt. Let's wrap the resetting logic in a view animation method to smooth it out.
🛑 Reference to property startingPointKartView0 in closure requires explicit self. to make capture semantics explicit… Insert self.
If you see this error, accept Xcode's suggested fix or add self. before each object yourself.
The requirement of self
in this case has to do with unique properties of closures in Swift and is not important for our purposes at this point.
All you need to know is…
Anytime you are working with animations and you get an error instructing you to add self.
…JUST DO IT! 👟
UIView.animate(withDuration: 0.8) {
self.kartView0.center = self.startingPointKartView0
self.kartView1.center = self.startingPointKartView1
self.kartView2.center = self.startingPointKartView2
}
📲 RUN YOUR APP to see the karts animate as they reset their positions.
Reset karts to their unmodified states
The karts reset to their starting positions just fine, however their transforms are not being reset which is leading to odd behavior if a user has scaled or rotated a kart.
To reset a transform, we just need to create a new unmodified transform and assign it to the view we want to reset, which in this case is our kart views. You can create an unmodified transform using the transform identity
property.
self.kartView0.transform = CGAffineTransform.identity
self.kartView1.transform = CGAffineTransform.identity
self.kartView2.transform = CGAffineTransform.identity
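Putting the whole reset together, the long press action might now look like this sketch:
@IBAction func didLongPressBackground(_ sender: UILongPressGestureRecognizer) {
    UIView.animate(withDuration: 0.8) {
        // Animate positions back to where the karts started
        self.kartView0.center = self.startingPointKartView0
        self.kartView1.center = self.startingPointKartView1
        self.kartView2.center = self.startingPointKartView2
        // Clear any scaling or rotation the user applied
        self.kartView0.transform = CGAffineTransform.identity
        self.kartView1.transform = CGAffineTransform.identity
        self.kartView2.transform = CGAffineTransform.identity
    }
}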
📲 RUN YOUR APP and see if scaled or rotated karts animate back to their unmodified states.
☝️ NO submission is required for labs
In this lab you will build a chat client using Parse to explore the features of its ORM and backend service. We'll explore how to authenticate a user, design schema, and save and retrieve data from a Parse server.
At the end of the exercise your app will look like this:
Add pod 'Parse' to your Podfile and run pod install.
The applicationId and server are provided in the code snippet below:
In any Swift file that you're using Parse, add import Parse
to the top of the file.
In the AppDelegate, register Parse in the application:didFinishLaunchingWithOptions:
method:
Parse.initialize(with: ParseClientConfiguration(block: { (configuration: ParseMutableClientConfiguration) in
configuration.applicationId = "CodePath-Parse"
configuration.server = "http://45.79.67.127:1337/parse"
}))
The Login Screen should allow a new user to signup for the chat app or an existing user to login.
Create a new View Controller (or rename the default one) called LoginViewController
.
Add the following views to the login screen:
New user can tap "Sign Up" button to sign up
Existing user can tap "Login" button to login
User sees an alert if either username or password field is empty when trying to sign up
Use the isEmpty property of a text field's text to check if it's empty. Use the || operator to require one condition or the other.
User sees an alert with error description if there is a problem during sign up or login
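A hedged sketch of what the sign up and login actions might look like; the outlet names (usernameField, passwordField) and action names are assumptions, while the PFUser calls are the standard Parse ones:
@IBAction func onSignUp(_ sender: Any) {
    let user = PFUser()
    user.username = usernameField.text
    user.password = passwordField.text
    user.signUpInBackground { (success, error) in
        if success {
            print("User signed up!")
            // TODO: present the Chat screen (next milestone)
        } else if let error = error {
            // TODO: show an alert with error.localizedDescription
            print("Sign up error: \(error.localizedDescription)")
        }
    }
}

@IBAction func onLogin(_ sender: Any) {
    let username = usernameField.text ?? ""
    let password = passwordField.text ?? ""
    PFUser.logInWithUsername(inBackground: username, password: password) { (user, error) in
        if user != nil {
            print("User logged in!")
            // TODO: present the Chat screen (next milestone)
        } else if let error = error {
            // TODO: show an alert with error.localizedDescription
            print("Login error: \(error.localizedDescription)")
        }
    }
}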
The Chat Screen will allow the user to compose and send a message.
Create a new View Controller, "ChatViewController".
Create a new file, "ChatViewController" as a subclass of UIViewController and associate it with the Chat View Controller in storyboard.
Embed the Chat View Controller in a Navigation Controller (Editor -> Embed In -> Navigation Controller) and set the navigation bar title to "Chat".
Create a Modal segue from the Login View Controller to the navigation controller. Once created, select the Segue and in the Attributes inspector (DJ Slider) give the segue an identifier, "loginSegue".
After a successful sign up or login from the Login View Controller, modally present the Chat View Controller programmatically.
self.performSegue(withIdentifier: "loginSegue", sender: nil)
At the top of Chat View Controller, add a text field and a button to compose a new message
When the user taps the "Send" button, create a new Message of type PFObject and save it to Parse
Use the class name: Message
(this is case sensitive).
let chatMessage = PFObject(className: "Message")
Store the text of the text field in a key called text
. (Provide a default empty string so message text is never nil
)
chatMessage["text"] = chatMessageField.text ?? ""
Call saveInBackground(block:)
and print when the message successfully saves or any errors.
chatMessage.saveInBackground { (success, error) in
if success {
print("The message was saved!")
} else if let error = error {
print("Problem saving message: \(error.localizedDescription)")
}
}
On successful message save, clear the text from the text chat field.
Adopt UITableViewDataSource and conform to the data source protocol by implementing the required methods.
Create a refresh function that is run every second.
Query Parse for all messages using the Message
class.
You can sort the results in descending order with the createdAt
field.
query.addDescendingOrder("createdAt")
Once you have a successful response, save the resulting array ([PFObject]) in a property (instance variable) of the Chat View Controller and reload the table view data.
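A hedged sketch of that refresh flow inside ChatViewController; the messages property name, tableView outlet, and one-second Timer are assumptions layered on the standard PFQuery calls:
var messages: [PFObject] = []

override func viewDidLoad() {
    super.viewDidLoad()
    tableView.dataSource = self
    // Re-query Parse for messages every second
    Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(onTimer), userInfo: nil, repeats: true)
}

@objc func onTimer() {
    let query = PFQuery(className: "Message")
    query.addDescendingOrder("createdAt")
    query.findObjectsInBackground { (objects, error) in
        if let objects = objects {
            // Store the results and refresh the table view
            self.messages = objects
            self.tableView.reloadData()
        } else if let error = error {
            print("Problem fetching messages: \(error.localizedDescription)")
        }
    }
}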
There are several ways to handle chat message text that is too long to fit on a single line. For instance, if we wanted to only support a single line in our label, we might set the Line Break property to Truncate Tail, or we could set the Autoshrink property and provide a minimum font size. Alternatively (in the following approach), we can leverage Auto Layout to expand our cell as needed to fit a label that requires multiple lines to fit the text.
Set the label's Lines property to 0 in the Attributes Inspector. A setting of 0 will allow the label to have as many lines as needed to fit its text within the space that the label is given.
In viewDidLoad(), configure the following table view properties:
// Auto size row height based on cell autolayout constraints
tableView.rowHeight = UITableViewAutomaticDimension
// Provide an estimated row height. Used for calculating scroll indicator
tableView.estimatedRowHeight = 50
When creating a new message, add a key called user
and set it to PFUser.current()
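In code, that's one extra line where the message is created:
chatMessage["user"] = PFUser.current()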
Add a username
label to the Chat cell to display the chat message author's username.
When querying for messages, add an additional query parameter, includeKey(_:)
on the query to instruct Parse to fetch the related user.
query.includeKey("user")
In cellForRow(atIndexPath:), if a chat message has the user property set, set the username label to the user's username. Otherwise, set a default username:
if let user = chatMessage["user"] as? PFUser {
// User found! update username label with username
cell.usernameLabel.text = user.username
} else {
// No user found, set default username
cell.usernameLabel.text = "🤖"
}
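For context, a sketch of how this can sit inside cellForRow(atIndexPath:); the cell identifier ChatCell, the messageLabel outlet, and the messages property are assumptions:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ChatCell", for: indexPath) as! ChatCell
    let chatMessage = messages[indexPath.row]
    // Message text was stored under the "text" key when it was saved
    cell.messageLabel.text = chatMessage["text"] as? String
    // Show the author's username if the related user was fetched
    if let user = chatMessage["user"] as? PFUser {
        cell.usernameLabel.text = user.username
    } else {
        cell.usernameLabel.text = "🤖"
    }
    return cell
}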
Select the Chat View Controller in the storyboard canvas and in the Identity Inspector (Driver's License), set the Storyboard ID to ChatViewController
.
In the AppDelegate, check if there is a current logged in user.
Parse automatically caches the current user on sign up or login. The current user can be accessed using, PFUser.current()
if let currentUser = PFUser.current() {
print("Welcome back \(currentUser.username!) 😀")
// TODO: Load Chat view controller and set as root view controller
...
}
Programmatically load the Chat View Controller and set as root view controller.
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let chatViewController = storyboard.instantiateViewController(withIdentifier: "ChatViewController")
window?.rootViewController = chatViewController
User sees an activity indicator while waiting for authentication.
User can pull to refresh Chat feed
Add an "Adorable Avatar" for each user by requesting an avatar from the Adorable Avatars API.
Use the af_setImage(withURL:) UIImageView instance method to fetch and set the image at the specified url.
Chat Bubble Style Design
Remove table view row outlines in viewDidLoad()
tableView.separatorStyle = .none
Add a view to serve as your speech bubble. Move the label as a subview of the bubble view and re-do autolayout constraints
Set desired color (In code or IB)
Configure rounded edges in code.
bubbleView.layer.cornerRadius = 16
bubbleView.clipsToBounds = true
Expand or contract the cell layout as needed to show the chat message author (user) if it exists
☝️ NO submission is required for labs
Today we'll be building a photo map. It will allow the user to take a photo, tag it with a location, and then see a map with all the previously tagged photos.
PhotoMapViewController => This is where you'll add the map in Milestone #2.
LocationsViewController => This is already implemented and allows you to search Foursquare for a location where you want to drop a photo.
FullImageViewController => This is where you'll add a fullscreen image in Bonus #2.
Update the following credentials in LocationsViewController to connect to the Foursquare API:
CLIENT_ID = QA1L0Z0ZNA2QVEEDHFPQWK0I5F1DE3GPLSNW4BZEBGJXUCFL
CLIENT_SECRET = W2AOE1TYC4MHK5SZYOUGX0J3LVRALMPB4CXT3ZH21ZCPUMCU
Implement the PhotoMapViewController to display a map of San Francisco with a camera button overlay.
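A hedged sketch of centering the map on San Francisco in viewDidLoad; it assumes the import MapKit step below, a mapView outlet, and illustrative coordinate/span values:
override func viewDidLoad() {
    super.viewDidLoad()
    // Center on San Francisco (coordinates and span are illustrative choices)
    let sfCoordinate = CLLocationCoordinate2D(latitude: 37.7833, longitude: -122.4167)
    let region = MKCoordinateRegion(center: sfCoordinate, span: MKCoordinateSpan(latitudeDelta: 0.1, longitudeDelta: 0.1))
    mapView.setRegion(region, animated: false)
}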
Add import MapKit to PhotoMapViewController.
Create a property in your PhotoMapViewController to store your picked image:
var pickedImage: UIImage!
Pressing the camera button should modally present the camera.
Check UIImagePickerController.isSourceTypeAvailable(.camera), which will return true if the device supports the camera. See the note here for an example.
When the user has chosen an image, in your delegate method you'll want to:
- Store the chosen image in your pickedImage property
- Dismiss the picker and perform the tagSegue
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
let originalImage = info[UIImagePickerController.InfoKey.originalImage] as! UIImage
let editedImage = info[UIImagePickerController.InfoKey.editedImage] as! UIImage
// Do something with the images (based on your use case)
pickedImage = editedImage
// Dismiss UIImagePickerController to go back to your original view controller
dismiss(animated: true) {
self.performSegue(withIdentifier: "tagSegue", sender: nil)
}
}
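For reference, a hedged sketch of the camera button action that presents the picker; the action name didTapCameraButton is an assumption:
@IBAction func didTapCameraButton(_ sender: Any) {
    let picker = UIImagePickerController()
    picker.delegate = self
    picker.allowsEditing = true
    // Use the camera when available (on a device); fall back to the photo library on the simulator
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        picker.sourceType = .camera
    } else {
        picker.sourceType = .photoLibrary
    }
    present(picker, animated: true, completion: nil)
}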
After the user selects an image, if you completed Milestone 3, they'll be taken to a view controller where they can pick a location to tag the image with. In this milestone, we'll drop a pin at that location.
Create an annotation for the picked location, set its title (e.g. annotation.title = String(describing: latitude)), and add the annotation to your mapView instance.
Add MKMapViewDelegate to your PhotoMapViewController's class declaration. The class declaration should look like:
class PhotoMapViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, LocationsViewControllerDelegate, MKMapViewDelegate
In viewDidLoad, add the following line:
mapView.delegate = self
Implement mapView(_:viewFor:) to add your pickedImage to the annotation view.
func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
let reuseID = "myAnnotationView"
var annotationView = mapView.dequeueReusableAnnotationView(withIdentifier: reuseID)
if (annotationView == nil) {
annotationView = MKPinAnnotationView(annotation: annotation, reuseIdentifier: reuseID)
annotationView!.canShowCallout = true
annotationView!.leftCalloutAccessoryView = UIImageView(frame: CGRect(x:0, y:0, width: 50, height:50))
}
let imageView = annotationView?.leftCalloutAccessoryView as! UIImageView
// Add the image you stored from the image picker
imageView.image = pickedImage
return annotationView
}
Add a rightCalloutAccessoryView of type UIButtonType.detailDisclosure to the annotation view. When the user taps the callout button, perform the segue named fullImageSegue to show the FullImageViewController.
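A hedged sketch of wiring up that callout button; the segue handling here is an assumption layered on the annotation view snippet above:
// Inside mapView(_:viewFor:), after configuring the left accessory:
// annotationView!.rightCalloutAccessoryView = UIButton(type: .detailDisclosure)

// Then respond to taps on the callout accessory:
func mapView(_ mapView: MKMapView, annotationView view: MKAnnotationView, calloutAccessoryControlTapped control: UIControl) {
    performSegue(withIdentifier: "fullImageSegue", sender: nil)
}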